

User:Was a bee/IITC

Giulio Tononi, "An information integration theory of consciousness", BMC Neuroscience 2004, 5:42

This is a translation of an open-access paper published under the cc-by-2.0 license. It was prepared by Was a bee, a user of the Japanese Wikipedia, for reference while writing articles; it is neither polished prose nor guaranteed to be accurate. For details, please consult the original paper below.

  • Giulio Tononi "An information integration theory of consciousness" BMC Neuroscience 2004, 5:42, doi:10.1186/1471-2202-5-42 [1]
© 2004 Tononi; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Outline of the paper

Background

This section explains the background of the study. Consciousness poses two major problems. The first concerns the conditions that determine which kinds of system can have conscious experience: for example, why does our consciousness arise from certain parts of the brain (such as the thalamocortical system) and not from others (such as the cerebellum), and why are we conscious when awake but not during dreamless sleep? The second concerns the conditions that determine the content of a system's consciousness: for example, why do specific parts of the brain contribute specific qualities (such as vision or audition) to conscious experience?

Presentation of the hypothesis

This section presents a hypothesis about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to a system's capacity to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation (the availability of a very large number of conscious experiences) and integration (the unity of each such experience). The theory states that the quantity of consciousness available to a system is measured by the Φ value of a complex of elements: Φ is the amount of causally effective information that can be integrated across the informationally weakest link of a subset of elements, and a complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory further states that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the values, at any given time, of the variables mediating informational interactions among the elements of a complex.

Testing the hypothesis

This section argues that the information integration theory accounts, at least in broad outline, for several neurophysiological observations concerning consciousness. These include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence, and be influenced by, neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on the neural interactions that support consciousness.

Implications of the hypothesis

This section explains that, according to the theory, consciousness is a fundamental quantity: it is graded, it is present in infants and animals, and it should be possible to build conscious artifacts.

Background

Consciousness is everything we experience: it is what abandons us every night when we fall into dreamless sleep, and what returns the next morning when we wake up [1]. Without consciousness, as far as we are concerned, there would be neither an external world nor our own selves: there would be nothing at all. Trying to understand consciousness raises two main problems [2][3]. The first problem concerns the conditions that determine which kinds of system can have conscious experience. For example, why are certain regions of the brain important for consciousness while others, apparently just as rich in neurons and connections, are not? And why are we conscious when awake or dreaming but nearly unconscious during dreamless sleep, even though the brain remains active in all these states? The second problem concerns what kind of consciousness a system has. For example, what determines the specific, seemingly mutually irreducible qualities of the modalities (vision, audition, pain), submodalities (visual color and motion), and dimensions (red and blue) that characterize our conscious experience? Why does color look the way it does, and why is that different from the way music sounds or pain feels? Solving the first problem would tell us which physical systems give rise to consciousness, and in what quantity or at what level; solving the second would tell us the quality or content of the consciousness so generated.

Presentation of the hypothesis

The first problem: what determines whether a system has consciousness?

We all know that consciousness appears when we wake up and vanishes when we fall asleep. We also know that we "lose consciousness" after a blow to the head, or when taking certain drugs (anesthetics, for example). These everyday observations suggest that consciousness has a physical substrate, and that this substrate must be working properly whenever we are fully conscious. They also lead to a more general question: what conditions determine how much consciousness is present? For instance, is a newborn baby conscious, and if so, how much? Are animals conscious, and if so, are some species more conscious than others? Can they feel pain? Could a conscious artifact be built out of non-neural materials? Is a person with akinetic mutism (eyes open, but mute, largely unresponsive and immobile) conscious or not? How much consciousness is there during sleepwalking or psychomotor seizures? A theoretical analysis of the empirical evidence should help in answering such questions.

Consciousness and information integration

In the theory presented in this paper, consciousness is treated as the capacity to integrate information. To us humans, endowed with consciousness and taking it largely for granted (perhaps precisely because of that), this claim is not self-evident. To gain some perspective, it is useful to consider thought experiments that bring out the key properties of subjective experience: its informativeness, its unity, and its spatial and temporal scale.

Information

Consider the following thought experiment. You are facing a blank screen that alternates between on and off, and you have been instructed to say "light" when the screen turns on and "dark" when it turns off. A photodiode, a device that detects light, has also been placed in front of the screen, set up so that it beeps while the screen is lit and stays silent while the screen is dark. The first problem of consciousness is contained within this thought experiment. When you distinguish the screen being on from its being off, you have the conscious experience of "seeing" light or darkness. The photodiode also distinguishes the screen being on from its being off, but it presumably does not consciously "see" light or darkness. What, then, is the difference between you and the photodiode that gives you the experience of "seeing"? (see also Appendix, i)

According to the theory presented here, the key difference between you and the photodiode concerns how much information is generated when the distinction is made. Classically, information is the reduction of uncertainty that occurs when one among several possible outcomes takes place [4]. It is measured by the entropy function: for a set of outcomes occurring with probabilities p_i, H = -Σ_i p_i log2(p_i). Thus, tossing a fair coin and obtaining heads corresponds to 1 bit of information, since there are only two alternatives; throwing a fair die (six equally likely outcomes) yields log2(6) ≈ 2.59 bits of information (H decreases when some alternatives are more likely than others, as with a loaded die).
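As a quick check on these numbers, the entropy formula can be evaluated directly; a minimal sketch in Python (the helper name entropy_bits is ours, not the paper's):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))          # fair coin: 1 bit
print(entropy_bits([1/6] * 6))           # fair die: log2(6), about 2.585 bits
print(entropy_bits([0.5] + [0.1] * 5))   # loaded die: less than log2(6)
```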

When the blank screen turns on, the photodiode enters one of its two possible states and beeps; as with the coin, this corresponds to 1 bit of information. But when you see the dark screen turn on, the state you enter is, unlike the photodiode's, one out of an extraordinarily large number of possible states. Whereas the photodiode's repertoire comprises a single minimal distinction, yours is enormous. This is not hard to appreciate. Instead of a uniformly lit screen, imagine being shown, one after another, single frames cut at random from many different films and spliced together: a sequence that has never been produced before. Effortlessly, with each frame you enter a different state and "see" a different image. This means that when you enter the state of "seeing light", you rule out not just "seeing dark" but an enormous number of other alternatives. Whether or not you think of these alternatives (most likely you do not), this corresponds to a very large amount of information (see also Appendix, ii). This point is so simple that it has generally been overlooked.

Integration

The enormous number of states available to you is one major difference from the photodiode, but by itself it is not sufficient to account for the occurrence of conscious experience. To see why, consider an idealized one-megapixel digital camera, whose sensor chip is essentially a collection of one million photodiodes. Even if each photodiode has only two states, the chip as a whole has 2^1,000,000 distinguishable states (corresponding to 1,000,000 bits). Presented with the sequence of frames spliced at random from different films, the camera would indeed readily enter a different state at each moment. Yet nobody believes the camera to be conscious. What is the key difference between you and the camera?

According to the theory presented here, the key difference between you and the camera is the integration of information. From the perspective of an external observer, probing it with all kinds of input signals, the camera chip does enter a huge number of different states. However, the sensor chip is better described not as a single integrated system with a repertoire of 2^1,000,000 states, but as a mere collection of one million photodiodes with two states each. This is because there are no interactions among the photodiodes on the chip: no information is integrated among them. If the sensor chip were cut up into its individual photodiodes, the camera would perform exactly as before.

By contrast, your repertoire of states cannot be decomposed into the repertoires of independent component parts. This is because there are massive causal interactions among the elements of your brain: the state of each element depends causally on the states of other elements, which is to say that information is integrated among them. Indeed, unlike cutting up the photodiodes of the camera sensor, disconnecting the elements of your brain that underlie consciousness would have disastrous consequences. This integration of information in conscious experience is also evident phenomenologically: when you consciously "see" an image, that image is experienced as an integrated whole and cannot be subdivided into separate component images that are experienced independently. However hard you try, you cannot experience color independently of shape, or the left half of your visual field independently of the right half. To achieve that, the brain would have to be physically split in two, preventing information integration between the two hemispheres. But such split-brain operations yield two separate subjects of experience, each with a smaller repertoire of states and more limited performance [5].

Temporal and spatial properties

Finally, it is important to appreciate the characteristic spatial and temporal scale at which conscious experience unfolds. For example, experience flows in time at a particular speed, neither faster nor slower. This is easy to see: playing a film speeded up a hundredfold, or slowed down into slow motion, does not accelerate or decelerate the speed of your experience. Studies of how percepts become specified and stabilized, a process known as microgenesis, suggest that it takes 100-200 milliseconds to generate a fully formed sensory experience, and that a conscious thought takes even longer to surface [6]. Indeed, the emergence of a visual percept somewhat resembles the development of a photograph: at first there is merely an awareness that something has changed; then one notices that it is something visual (rather than, say, auditory); then basic features gradually appear, such as motion, location, and rough size, followed by color and shape; finally the whole object takes form and is recognized. This process clearly proceeds from less differentiated to more differentiated states [6]. Other studies suggest that a single conscious moment does not extend beyond 2-3 seconds [7]. Whether consciousness is more like a series of discrete snapshots or a continuous flow is still debated, but its time scale certainly lies between these lower and upper limits. Thus, as phenomenological analysis also suggests, consciousness must be treated as a capacity to integrate a large amount of information that is deployed at its characteristic spatial and temporal scale.

Measuring the capacity to integrate information: the Φ of a complex

If consciousness corresponds to the capacity to integrate information, then a physical system should have a substantial amount of consciousness if it has a large repertoire of available states (information) and if it cannot be decomposed into causally independent subsystems (integration). How can such an integrated system be identified, and how can its repertoire of available states be measured [2][8]?

As noted above, the repertoire of states available to a system can be measured with the entropy function, but information measured in this way is completely insensitive to whether or not it is integrated. An entropy measure therefore cannot distinguish one million photodiodes with two states each from a single integrated system with a repertoire of 2^1,000,000 states. What matters is to determine whether a subset of elements constitutes a causally integrated system, or whether it breaks down into several independent or semi-independent subsets among which no information is integrated.

To see how this can be done, consider an extremely simplified system of a few elements. To make things concrete, assume we are dealing with a neural system. As elements, imagine local groups of interconnected neurons sharing inputs and outputs, such as cortical minicolumns. Assume further that each element can take several discrete states of activity, corresponding to different firing levels sustained for several hundred milliseconds. For the present purposes, also assume that the system is completely disconnected from the environment and receives no external inputs, as is the case, for instance, when we dream.

Effective information

Take a subset S of elements from such a system; its causal interactions are diagrammed in Fig. 1a. What we want to measure is the information generated when S enters one particular state out of its repertoire of possible states, with the requirement that the measure reflect integration, i.e. that each state result from causal interactions within the system. How can such a measure be obtained? One way is to divide S into two parts, A and B, and to evaluate the responses of B that can be induced by all possible inputs from A. In neural terms, we try out all possible combinations of firing patterns as outputs from A, and determine how differentiated is the repertoire of firing patterns they produce in B. In information-theoretical terms, we give maximum entropy to the outputs of A (AHmax), i.e. we substitute A with an independent noise source, and we determine the entropy of the responses of B to inputs from A. The effective information from A to B is then defined as EI(A→B) = MI(AHmax;B), where MI(A;B) = H(A) + H(B) - H(AB) is the mutual information, a measure of the entropy or information shared between a source (A) and a target (B). Note that, since A has been substituted by an independent noise source, there are no causal effects of B on A; the entropy shared by A and B is therefore due exclusively to causal effects of A on B. Note also that EI(A→B) considers all possible effects of A on B, not just those that would be observed if the system were left to itself. In general, EI(A→B) and EI(B→A) are not symmetric; finally, the value of EI(A→B) is bounded by the smaller of AHmax and BHmax. Correspondingly, to measure EI(B→A), we give maximum entropy to the outputs of B and determine the entropy of the responses of A to inputs from B [translator's note: in the original paper, A and B were reversed in this sentence]. Clearly, EI(A→B) will be high if the connections between A and B are strong and differentiated, i.e. if different outputs from A induce different firing patterns in B. Conversely, EI(A→B) will be low or zero if A has little effect on B, or if B responds in the same way to all outputs from A. For a given bipartition of a subset, we can then define the sum of the effective information in both directions, EI(A⇄B) = EI(A→B) + EI(B→A), as an index of the repertoire of possible causal effects of A on B and of B on A.
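The definition EI(A→B) = MI(AHmax;B) can be illustrated with a deliberately tiny, noiseless toy: A is replaced by a uniform (maximum-entropy) source, and we measure how much of that injected entropy shows up in B. This is only a discrete, deterministic sketch of the idea, not the paper's analytical Gaussian procedure:

```python
import math

def mutual_information(joint):
    """MI(A;B) = H(A) + H(B) - H(AB), computed from a joint
    distribution given as a dict mapping (a, b) -> probability."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def effective_information(mechanism, a_states):
    """EI(A->B) = MI(AHmax;B): inject the uniform (maximum-entropy)
    distribution on A's outputs and see how B responds.
    `mechanism(a)` returns the state of B caused by output a of A."""
    joint = {}
    for a in a_states:
        b = mechanism(a)
        joint[(a, b)] = joint.get((a, b), 0.0) + 1 / len(a_states)
    return mutual_information(joint)

# A perfect one-bit wire from A to B integrates 1 bit.
print(effective_information(lambda a: a, [0, 1]))   # 1.0
# A mechanism that ignores A integrates nothing.
print(effective_information(lambda a: 0, [0, 1]))   # 0.0
```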

Figure 1. Effective information, minimum information bipartition, and complexes.
a. Effective information. A subset S of four elements ({1,2,3,4}, blue circles) within a system X (black ellipse). S is divided into parts A and B by a bipartition (dashed gray line, {1,3}/{2,4}). The arrows indicate causally effective connections crossing the bipartition (other connections linking A and B to the rest of X are not shown). To measure EI(A→B), maximum entropy Hmax (corresponding to an independent noise source) is injected into the outgoing connections of A, and the entropy of the states of B induced by inputs from A is measured. Note that A can affect B both directly, through the connections linking the two parts, and indirectly, via X. To measure EI(B→A), maximum entropy is substituted on B's side. The effective information for this bipartition is EI(A⇄B) = EI(A→B) + EI(B→A).
b. Minimum information bipartition. For subset S = {1,2,3,4}, the horizontal bipartition {1,3}/{2,4} yields a positive value of EI. But the vertical bipartition {1,2}/{3,4} yields EI = 0, and is therefore the minimum information bipartition (MIB). The other bipartitions of S = {1,2,3,4}, namely {1,4}/{2,3}, {1}/{2,3,4}, {2}/{1,3,4}, {3}/{1,2,4}, and {4}/{1,2,3}, all have EI>0.
c. Analysis of complexes. By considering every subset of system X, one can identify its complexes and rank them by their Φ value (the effective information across their minimum information bipartition). Assuming that some elements of X are not connected to each other, it is easy to see that Φ>0 for the subsets {3,4} and {1,2}, while Φ=0 for the subsets {1,3}, {1,4}, {2,3}, {2,4}, {1,2,3}, {1,2,4}, {1,3,4}, {2,3,4}, and {1,2,3,4}. The subsets {3,4} and {1,2} are not part of any subset of higher Φ, and are therefore complexes. This is indicated in the figure by the gray ellipses (darker shading corresponds to higher Φ).

Methodological note. In order to identify complexes and their Φ(S) for systems with many different connection patterns, each system X was implemented as a stationary multidimensional Gaussian process such that values for effective information could be obtained analytically (details in [8]). Briefly, we implemented numerous model systems X composed of n neural elements with connections CONij specified by a connection matrix CON(X) (no self-connections). In order to compare different architectures, CON(X) was normalized so that the absolute value of the sum of the afferent synaptic weights per element corresponded to a constant value w<1 (here w = 0.5). If the system's dynamics corresponds to a multivariate Gaussian random process, its covariance matrix COV(X) can be derived analytically. As in previous work, we consider the vector X of random variables that represents the activity of the elements of X, subject to independent Gaussian noise R of magnitude c. We have that, when the elements settle under stationary conditions, X = X * CON(X) + cR. By defining Q = (1 - CON(X))^-1 and averaging over the states produced by successive values of R, we obtain the covariance matrix COV(X) = <X^t * X> = <Q^t * R^t * R * Q> = Q^t * Q, where the superscript t refers to the transpose. Under Gaussian assumptions, all deviations from independence among the two complementary parts A and B of a subset S of X are expressed by the covariances among the respective elements. Given these covariances, values for the individual entropies H(A) and H(B), as well as for the joint entropy of the subset H(S) = H(AB), can be obtained as, for example, H(A) = (1/2)ln[(2πe)^n |COV(A)|], where |•| denotes the determinant. The mutual information between A and B is then given by MI(A;B) = H(A) + H(B) - H(AB). Note that MI(A;B) is symmetric and positive.
To obtain the effective information between A and B within model systems, independent noise sources in A are enforced by setting to zero the strength of the connections within A and afferent to A. Then the covariance matrix for A is equal to the identity matrix (given independent Gaussian noise), and any statistical dependence between A and B must be due to the causal effects of A on B, mediated by the efferent connections of A. Moreover, all possible outputs from A that could affect B are evaluated. Under these conditions, EI(A→B) = MI(AHmax;B). The independent Gaussian noise R applied to A is multiplied by cp, the perturbation coefficient, while the independent Gaussian noise applied to the rest of the system is given by ci, the intrinsic noise coefficient. Here cp = 1 and ci = 0.00001 in order to emphasize the role of the connectivity and minimize that of noise. To identify complexes and obtain their capacity for information integration, one considers every subset S of X composed of k elements, with k = 2,..., n. For each subset S, we consider all bipartitions and calculate EI(A⇄B) for each of them. We find the minimum information bipartition MIB(S), the bipartition for which the normalized effective information reaches a minimum, and the corresponding value of Φ(S). We then find the complexes of X as those subsets S with Φ>0 that are not included within a subset having higher Φ, and rank them based on their Φ(S) value. The complex with the maximum value of Φ(S) is the main complex. MATLAB functions used for calculating effective information and complexes are at http://tononi.psychiatry.wisc.edu/informationintegration/toolbox.html.
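The analytical recipe in this note (Q = (1 - CON(X))^-1, COV(X) = Q^t Q, Gaussian entropies from log-determinants) can be sketched numerically. The toy connection matrix below is an arbitrary assumption, with afferent weights normalized to w = 0.5 as in the note:

```python
import numpy as np

def covariance_matrix(CON):
    """Stationary covariance of X = X @ CON + R with unit Gaussian noise R:
    COV(X) = Q^t Q, where Q = (I - CON)^-1 (cf. the methodological note)."""
    n = CON.shape[0]
    Q = np.linalg.inv(np.eye(n) - CON)
    return Q.T @ Q

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a multivariate Gaussian:
    H = (1/2) ln[(2*pi*e)^n * |COV|]."""
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

# Toy 4-element system (an arbitrary assumption, not a system from the paper).
rng = np.random.default_rng(0)
CON = rng.random((4, 4))
np.fill_diagonal(CON, 0.0)                   # no self-connections
CON *= 0.5 / CON.sum(axis=0, keepdims=True)  # afferent weights sum to w = 0.5

COV = covariance_matrix(CON)
A, B = [0, 1], [2, 3]                        # one bipartition of the system
H_A = gaussian_entropy(COV[np.ix_(A, A)])
H_B = gaussian_entropy(COV[np.ix_(B, B)])
H_AB = gaussian_entropy(COV)
MI = H_A + H_B - H_AB                        # MI(A;B) = H(A) + H(B) - H(AB) >= 0
```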

Information integration

Based on the notion of effective information for a bipartition, we can assess how much information can be integrated within a system of elements. To this end, we note that a subset S of elements cannot integrate any information (as a subset) if there is a way to partition S in two parts A and B such that EI(A⇄B) = 0 (Fig. 1b, vertical bipartition). In such a case, in fact, we would clearly be dealing with at least two causally independent subsets, rather than with a single, integrated subset. This is exactly what would happen with the photodiodes making up the sensor of a digital camera: perturbing the state of some of the photodiodes would make no difference to the state of the others. Similarly, a subset can integrate little information if there is a way to partition it in two parts A and B such that EI(A⇄B) is low: the effective information across that bipartition is the limiting factor on the subset's information integration capacity. Therefore, in order to measure the information integration capacity of a subset S, we should search for the bipartition(s) of S for which EI(A⇄B) reaches a minimum (the informational "weakest link"). Since EI(A⇄B) is necessarily bounded by the maximum entropy available to A or B, min{EI(A⇄B)}, to be comparable over bipartitions, should be normalized by Hmax(A⇄B) = min{Hmax(A); Hmax(B)}, the maximum information capacity for each bipartition. The minimum information bipartition MIB(S) of subset S – its "weakest link" – is the bipartition for which the normalized effective information reaches a minimum, corresponding to min{EI(A⇄B)/Hmax(A⇄B)}. The information integration for subset S, or Φ(S), is simply the (non-normalized) value of EI(A⇄B) for the minimum information bipartition: Φ(S) = EI(MIB(S)). The symbol Φ is meant to indicate that the information (the vertical bar "I") is integrated within a single entity (the circle "O"; see Appendix, iii).
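The search for the minimum information bipartition can be sketched as brute-force enumeration. Here `ei` is an abstract stand-in for EI(A⇄B) (the toy values below mirror Fig. 1b, where the vertical split {1,2}/{3,4} carries no effective information), and Hmax is taken as 1 bit per binary element, a simplifying assumption of this sketch:

```python
from itertools import combinations

def phi(elements, ei):
    """Phi(S): the (non-normalized) EI across the minimum information
    bipartition (MIB), i.e. the bipartition minimizing
    EI(A<->B) / min{Hmax(A), Hmax(B)}."""
    elements = list(elements)
    best = None
    for k in range(1, len(elements) // 2 + 1):
        for A in combinations(elements, k):
            B = tuple(e for e in elements if e not in A)
            norm = min(len(A), len(B))   # min{Hmax(A), Hmax(B)} in bits
            val = ei(A, B)
            if best is None or val / norm < best[0]:
                best = (val / norm, val, (A, B))
    _, raw_ei, mib = best
    return raw_ei, mib

# Toy EI mirroring Fig. 1b: elements 1-2 and 3-4 form coupled pairs, and
# EI across a bipartition simply counts how many pairs it splits.
def toy_ei(A, B):
    return float(sum(1 for (i, j) in [(1, 2), (3, 4)] if (i in A) != (j in A)))

print(phi([1, 2, 3, 4], toy_ei))   # the vertical split {1,2}/{3,4} gives Phi = 0
```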

Complexes

We are now in a position to establish which subsets are actually capable of integrating information, and how much of it (Fig. 1c). To do so, we consider every possible subset S of m elements out of the n elements of a system, starting with subsets of two elements (m = 2) and ending with the subset corresponding to the entire system (m = n). For each of them, we measure the value of Φ, and rank them from highest to lowest. Finally, we discard all those subsets that are included in larger subsets having higher Φ (since they are merely parts of a larger whole). What we are left with are complexes – individual entities that can integrate information. Specifically, a complex is a subset S having Φ>0 that is not included within a larger subset having higher Φ. For a complex, and only for a complex, it is appropriate to say that, when it enters a particular state out of its repertoire, it generates an amount of integrated information corresponding to its Φ value. Of the complexes that make up a given system, the one with the maximum value of Φ(S) is called the main complex (the maximum is taken over all combinations of m>1 out of n elements of the system). Some properties of complexes worth pointing out are, for instance, that a complex can be causally connected to elements that are not part of it (the input and output elements of a complex are called ports-in and ports-out, respectively). Also, the same element can belong to more than one complex, and complexes can overlap.

In summary, a system can be analyzed to identify its complexes – those subsets of elements that can integrate information, and each complex will have an associated value of Φ – the amount of information it can integrate (see Appendix, iv). To the extent that consciousness corresponds to the capacity to integrate information, complexes are the "subjects" of experience, being the locus where information can be integrated. Since information can only be integrated within a complex and not outside its boundaries, consciousness as information integration is necessarily subjective, private, and related to a single point of view or perspective [1,9]. It follows that elements that are part of a complex contribute to its conscious experience, while elements that are not part of it do not, even though they may be connected to it and exchange information with it through ports-in and ports-out.
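The procedure above (enumerate subsets, compute Φ, discard subsets included in larger subsets of higher Φ) can be sketched directly. The toy Φ values are invented to mirror Fig. 1c, and `phi_of` is assumed to be given:

```python
from itertools import combinations

def find_complexes(elements, phi_of):
    """Complexes: subsets S with Phi > 0 that are not included in a
    larger subset of higher Phi (cf. Fig. 1c)."""
    scored = []
    for m in range(2, len(elements) + 1):
        for S in combinations(elements, m):
            p = phi_of(frozenset(S))
            if p > 0:
                scored.append((frozenset(S), p))
    return sorted(
        [(S, p) for (S, p) in scored
         if not any(S < T and q > p for (T, q) in scored)],
        key=lambda sp: -sp[1])   # ranked by Phi; the first is the main complex

# Invented toy Phi values mirroring Fig. 1c: only {1,2} and {3,4} integrate.
toy_phi = {frozenset({1, 2}): 2.0, frozenset({3, 4}): 1.0}
complexes = find_complexes([1, 2, 3, 4], lambda S: toy_phi.get(S, 0.0))
print(complexes)
```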

Information integration over space and time

The Φ value of a complex is dependent on both spatial and temporal scales that determine what counts as a state of the underlying system. In general, there will be a "grain size", in both space and time, at which Φ reaches a maximum. In the brain, for example, synchronous firing of heavily interconnected groups of neurons sharing inputs and outputs, such as cortical minicolumns, may produce significant effects in the rest of the brain, while asynchronous firing of various combinations of individual neurons may be less effective. Thus, Φ values may be higher when considering as elements cortical minicolumns rather than individual neurons, even if their number is lower. On the other hand, Φ values would be extremely low with elements the size of brain areas. Time wise, Φ values in the brain are likely to show a maximum between tens and hundreds of milliseconds. It is clear, for example, that if one were to stimulate one half of the brain by inducing many different firing patterns, and examine what effects this produces on the other half, no stimulation pattern would produce any effect whatsoever after just a tenth of a millisecond, and Φ would be equal to zero. After say 100 milliseconds, however, there is enough time for differential effects to be manifested, and Φ would grow. On the other hand, given the duration of conduction delays and of postsynaptic currents, much longer intervals are not going to increase Φ values. Indeed, a neural system will soon settle down into states that become progressively more independent of the stimulation. Thus, the search for complexes of maximum Φ should occur over subsets at critical spatial and temporal scales.

To recapitulate, the theory claims that consciousness corresponds to the capacity to integrate information. This capacity, corresponding to the quantity of consciousness, is given by the Φ value of a complex. Φ is the amount of effective information that can be exchanged across the minimum information bipartition of a complex. A complex is a subset of elements with Φ>0 and with no inclusive subset of higher Φ. The spatial and temporal scales defining the elements of a complex and the time course of their interactions are those that jointly maximize Φ.

The second problem: what determines the kind of consciousness a system has?

Even if we could establish that a given system is conscious, it would not immediately be clear what its consciousness is like. As mentioned at the outset, our consciousness has specific, mutually irreducible qualities: different modalities (e.g. vision, audition, pain), submodalities (e.g. visual color and motion), and dimensions (e.g. red and blue). What makes color experienced the way it is, and differently from the way music sounds or pain feels? And why is it so hard for us to imagine what a "sixth sense" would feel like? Consider also the conscious experience of others: does a gifted musician hearing an orchestra experience it the way you do, or more richly? And what about bats [9]? If they are conscious, what do they experience when they grasp the world through echolocation? Is it like seeing, like hearing, or like nothing we know? We know that there are degrees of freedom in the contents a system's consciousness can have, but there must also be conditions, necessary and sufficient ones, that strictly determine the kinds of experience such a system can have. This is the second problem of consciousness.

It is not obvious how best to address this problem, but it is certain that both the quantity and the quality of our consciousness depend on the proper functioning of its physical substrate, the brain. Consider how a person acquires a new discriminative ability, for example how one becomes a professional wine taster. Careful examination suggests that such learning is not merely a matter of attaching appropriate labels to the many different sensations the wines already produce; rather, the sensations generated by tasting wine themselves seem to grow broader and more refined. Similar results are found among people who learn, as part of their work, to discriminate perfumes, colors, sounds, or textures. Consider also the perceptual learning that takes place as a baby grows. A baby's experience may be little more than a "blooming, buzzing confusion" (much as red wine might taste to someone who has only ever drunk milk and water), yet the baby's perceptual abilities undoubtedly undergo a remarkable refinement.

These examples suggest that the quality and repertoire of our consciousness can change through learning. What matters here is that such perceptual learning depends on changes in the physical substrate of consciousness, in particular on rearrangements of the connection patterns among the corresponding neuronal groups in the thalamocortical system [10]. The tight relationship between the quality of conscious experience and brain structure is documented by countless neuroscientific studies. For example, damage to a certain region of the cerebral cortex permanently abolishes the ability to perceive visual motion, while every other aspect of consciousness remains intact; damage to another region selectively abolishes the perception of color [11]. Clearly, there is something in the structure of different brain regions that makes them contribute specific conscious qualities, such as visual motion or color. Importantly, damage to some of the areas whose loss abolishes the perception of motion or color also abolishes the ability to remember motion or color, to imagine them, and to see them in dreams. By contrast, damage to the retina, though it deprives us of sight, does not (unless one is blind from birth) rob us of our memory of colors, nor does it prevent us from imagining colors or dreaming in color. In other words, there is "something" in the structure of specific cortical areas, and not in the sense organs, that determines the quality of our conscious experience. What is this "something"?

Characterizing the quality of consciousness in a space of informational relationships: the effective information matrix

According to the theory, the quantity of consciousness of a complex is determined by the amount of information integrated among its elements, and its quality is determined by the informational relationships that causally link those elements [12]. That is, the way information is integrated within a complex determines not only how much consciousness it has, but also what kind of consciousness. More precisely, the theory holds that each element of a complex constitutes one dimension of an abstract relational space (a qualia space), and that the values of effective information among the elements of the complex define the relationships among these dimensions and thereby specify the structure of the space. (In crude Cartesian terms: each element of the complex corresponds to an axis of qualia space, and the effective information between two elements defines the angle between the corresponding axes; see also Appendix, v.) This relational space is sufficient to specify the quality of conscious experience. Thus, the reason why certain cortical areas give rise to the conscious experience of color while others give rise to that of visual motion is treated in terms of differences in the informational relationships within each area and between each area and the rest of the main complex. By contrast, informational relationships outside the main complex, such as sensory inputs, contribute neither to the quantity nor to the quality of consciousness.

As an example, consider two highly simplified linear systems of four elements (see Fig. 2 below). The top panels (a) show the diagram of causal interactions for the two systems. The system on the left is a divergent digraph: element 1 is connected to the other three elements with equal strength. An analysis of its complexes shows that it constitutes a single complex with a Φ value of 10 bits. The system on the right is arranged in a chain: element 1 connects to 2, 2 to 3, and 3 to 4. This system, too, constitutes a single complex with a Φ of 10 bits. The middle panels (b) show the effective information matrix for both complexes. This matrix contains, for every subset of the complex, the values of effective information between that subset and its complement, corresponding to all the informational relationships among the elements (the first row gives the values in one direction, the second row the values in the opposite direction). The dimensions of a complex's qualia space are given by its number of elements; here, qualia space has 4 dimensions. The effective information matrix then determines the relational structure of that space, which can be thought of as a kind of topology: each value in the matrix expresses how close two dimensions are (see also Appendix, vi). Although the two complexes have the same number of dimensions and the same value of Φ, the informational relationships defining their spaces differ. For example, the divergent complex has more zero entries, while the chain complex contains entries (the effective information from subset {1,3} to subset {2,4}) that are twice as large as its other nonzero entries.
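The structural contrast between the divergent and chain complexes can be sketched with a crude proxy: counting, for each bipartition, the directed edges from one part into the other (a stand-in for whether EI across that bipartition is zero or not, not the paper's analytical Gaussian EI):

```python
from itertools import combinations

def toy_ei_matrix(edges, elements):
    """For every subset A and its complement B, record a crude proxy for
    EI(A->B): the number of directed edges from A into B."""
    rows = {}
    for k in range(1, len(elements)):
        for A in combinations(elements, k):
            B = tuple(e for e in elements if e not in A)
            rows[(A, B)] = sum(1 for (i, j) in edges if i in A and j in B)
    return rows

divergent = [(1, 2), (1, 3), (1, 4)]       # element 1 feeds 2, 3 and 4
chain = [(1, 2), (2, 3), (3, 4)]           # 1 -> 2 -> 3 -> 4
m_div = toy_ei_matrix(divergent, [1, 2, 3, 4])
m_chn = toy_ei_matrix(chain, [1, 2, 3, 4])
# The divergent architecture yields more zero entries than the chain,
# so the two relational spaces differ even at equal Phi.
print(sum(v == 0 for v in m_div.values()),
      sum(v == 0 for v in m_chn.values()))
```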

Figure 2. Effective information matrix and states of activity for two complexes having the same value of Φ.
a. Diagram of causal interactions and analysis of complexes. Two systems are shown, one with a divergent architecture (left) and one with a chain architecture (right). Analysis of their complexes shows that both contain a complex of four elements with Φ = 10 bits.
b. Effective information matrix. The effective information matrix is shown for the two complexes in a. For each complex, every bipartition is indicated by listing one part (subset A) at the top and the other part (subset B) at the bottom. For each bipartition, the effective information from A to B is shown color-coded in the upper row, and that from B to A in the lower row (black: zero; red: intermediate values; yellow: high values). Note that the two complexes share the same value of Φ but differ in their effective information matrices. The effective information matrix defines the set of informational relationships, or qualia space, of each complex. Note also that the matrix represents only the informational relationships among the elements inside the main complex; relationships with elements outside the main complex (shown as empty white circles in the figure) do not contribute to qualia space.
c. State diagrams. Five representative states are shown for each of the two complexes. The activity states of the four elements of each complex are shown in a column (blue: active; black: inactive). The five states can be thought of, for instance, as evolving spontaneously over time, or as transitions driven by inputs from the environment. Although the states of activity are identical for the two complexes, their meaning differs because of the difference in the effective information matrices. The four rightmost columns of each panel show special states in which only one element is active at a time. Such states, if they could be achieved, would correspond most closely to a single "quale": the specific contribution of a particular element within a particular complex.

These two examples illustrate how the space of informational relationships within a complex is captured by the effective information matrix, and how that space can differ between two complexes having the same number of dimensions and the same amount of integrated information. Naturally, in complexes of high Φ such as those presumably underlying our own consciousness, qualia space is extraordinarily large and intricately structured. The central claim of the theory, however, is that even then the structure of phenomenological relationships must directly reflect informational relationships. For example, the experiences of red and blue appear irreducible to each other (red is not merely something less blue), so they must correspond to different dimensions of qualia space, i.e. to different elements of the complex. We also know that the difference between red and blue is smaller than the difference between either of them and, say, the blare of a trumpet. The EI values among the neuronal groups underlying these dimensions must reflect such relationships: the EI values among visual elements must be higher than those between elements of visual and auditory areas. The theory thus predicts specific informational relationships, within the corresponding cortical areas and between them and the rest of the complex, according to the particular quality of the different modalities and submodalities. For example, topographically mapped structures and "winner-take-all" arrangements should contribute to different kinds of experience. Another prediction is that the changes in the quality and repertoire of sensations brought about by perceptual learning should correspond to a restructuring of the informational relationships within and among the relevant cortical areas belonging to the main complex. The theory also makes predictions about informational relationships outside the complex: sensory inputs, for instance, do not contribute directly to the quality of the complex's conscious experience. Of course, sensory pathways and organs, and the statistical properties of external stimuli, play a fundamental role in shaping the informational relationships of the main complex, but that role is indirect and historical, exerted through evolution, development, and learning [13] (see also Appendix, vii).

Specifying each conscious experience: the state of the interacting variables

According to the theory, once the quantity and quality of a complex's consciousness have been specified, the particular conscious experience at any given time is determined simply by the state of activity of its elements at that time. (In the Cartesian analogy: each element corresponds to an axis of qualia space, and the effective information between elements defines the angles between the axes; together these determine the structure of the space. The activity state of each element then determines a position, i.e. a coordinate, along its axis, so that each momentary conscious experience is specified by giving a coordinate along every dimension.)

The relevant activity variables are those that mediate the informational relationships among the elements, that is, those that mediate effective information. For example, if the elements are local groups of neurons, then the relevant variables are their firing patterns over tens to hundreds of milliseconds.

The state of the complex at each moment can be represented schematically by a state diagram like that of Fig. 2c (the left panel showing the states of the divergent complex, the right panel those of the chain complex). Each column of the diagram gives the activity values of the elements of the complex (here, 0 or 1). Different conscious states correspond to different patterns of activity across the elements of the complex (elements outside the complex play no role). Each conscious state can thus be viewed as a point in the high-dimensional qualia space specified by the effective information matrix (see Appendix, viii), and a succession of conscious states, the stream of consciousness, can be viewed as a trajectory through that space. The state diagram also illustrates some states that have particular significance (second to fifth column). These are the states with just one active element, and all other elements silent (or active at some baseline level). It is not clear whether such highly selective states can be achieved within a large neural complex of high Φ, such as the one that is postulated to underlie human consciousness. To the extent that this is possible, such highly selective states would represent the closest approximation to experiencing that element's specific contribution to consciousness – its quality or "quale". However, because of the differences in qualia space between the two complexes, the same state over the four elements would correspond to different experiences (and mean different things) for the two complexes. It should also be emphasized that what defines a conscious state is the activity of all the elements of the complex, inactive elements counting just as much as active ones.

To recapitulate, the theory claims that the quality of consciousness associated with a complex is determined by its effective information matrix. The effective information matrix specifies all informational relationships among the elements of a complex. The values of the variables mediating informational interactions among the elements of a complex specify the particular conscious experience at any given time.

Testing the hypothesis

Consciousness, information integration, and the brain

Based on a phenomenological analysis, we have argued that consciousness corresponds to the capacity to integrate information. We have then considered how such capacity can be measured, and we have developed a theoretical framework for consciousness as information integration. We will now consider several neuroanatomical or neurophysiological factors that are known to influence consciousness. After briefly discussing the empirical evidence, we will use simplified computer models to illustrate how these neuroanatomical and neurophysiological factors influence information integration. As we shall see, the information integration theory not only fits empirical observations reasonably well, but offers a principled explanation for them.

Consciousness is generated by a distributed thalamocortical network that is at once specialized and integrated

Ancient Greek philosophers disputed whether the seat of consciousness was in the lungs, in the heart, or in the brain. The brain's pre-eminence is now undisputed, and scientists are trying to establish which specific parts of the brain are important. For example, it is well established that the spinal cord is not essential for our conscious experience, as paraplegic individuals with high spinal transections are fully conscious. Conversely, a well-functioning thalamocortical system is essential for consciousness [15]. Opinions differ, however, about the contribution of certain cortical areas [1,16-21]. Studies of comatose or vegetative patients indicate that a global loss of consciousness is usually caused by lesions that impair multiple sectors of the thalamocortical system, or at least their ability to work together as a system [22-24]. By contrast, selective lesions of individual thalamocortical areas impair different submodalities of conscious experience, such as the perception of color or of faces [25]. Electrophysiological and imaging studies also indicate that neural activity that correlates with conscious experience is widely distributed over the cortex (e.g. [20,26-29]). It would seem, therefore, that the neural substrate of consciousness is a distributed thalamocortical network, and that there is no single cortical area where it all comes together (see Appendix, ix).

The fact that consciousness as we know it is generated by the thalamocortical system fits well with the information integration theory, since what we know about its organization appears ideally suited to the integration of information. On the information side, the thalamocortical system comprises a large number of elements that are functionally specialized, becoming activated in different circumstances. [12,30]. Thus, the cerebral cortex is subdivided into systems dealing with different functions, such as vision, audition, motor control, planning, and many others. Each system in turn is subdivided into specialized areas, for example different visual areas are activated by shape, color, and motion. Within an area, different groups of neurons are further specialized, e.g. by responding to different directions of motion. On the integration side, the specialized elements of the thalamocortical system are linked by an extended network of intra- and inter-areal connections that permit rapid and effective interactions within and between areas [31-35]. In this way, thalamocortical neuronal groups are kept ready to respond, at multiple spatial and temporal scales, to activity changes in nearby and distant thalamocortical areas. As suggested by the regular finding of neurons showing multimodal responses that change depending on the context [36,37], the capacity of the thalamocortical system to integrate information is probably greatly enhanced by nonlinear switching mechanisms, such as gain modulation or synchronization, that can modify mappings between brain areas dynamically [34,38-40]. In summary, the thalamocortical system is organized in a way that appears to emphasize at once both functional specialization and functional integration.

As shown by computer simulations, systems of neural elements whose connectivity jointly satisfies the requirements for functional specialization and for functional integration are well suited to integrating information. Fig. 3a shows a representative connection matrix obtained by optimizing for Φ starting from random connection weights. A graph-theoretical analysis indicates that connection matrices yielding the highest values of information integration (Φ = 74 bits) share two key characteristics [8]. First, connection patterns are different for different elements, ensuring functional specialization. Second, all elements can be reached from all other elements of the network, ensuring functional integration. Thus, simulated systems having maximum Φ appear to require both functional specialization and functional integration. In fact, if functional specialization is lost by replacing the heterogeneous connectivity with a homogeneous one, or if functional integration is lost by rearranging the connections to form small modules, the value of Φ decreases considerably (Fig 3b,3c). Further simulations show that it is possible to construct a large complex of high Φ by joining smaller complexes through reciprocal connections [8]. In the thalamocortical system, reciprocal connections linking topographically organized areas may be especially effective with respect to information integration. In summary, the coexistence of functional specialization and functional integration, epitomized by the thalamocortical system [30], is associated with high values of Φ.

[Figure 3]

Figure 3. Information integration for a thalamocortical-like architecture. a. Optimization of information integration for a system that is both functionally specialized and functionally integrated. Shown is the causal interaction diagram for a network whose connection matrix was obtained by optimization for Φ (Φ = 74 bits). Note the heterogeneous arrangement of the incoming and outgoing connections: each element is connected to a different subset of elements, with different weights. Further analysis indicates that this network jointly maximizes functional specialization and functional integration among its 8 elements, thereby resembling the anatomical organization of the thalamocortical system [8]. b. Reduction of information integration through loss of specialization. The same amount of connectivity, distributed homogeneously to eliminate functional specialization, yields a complex with much lower values of Φ (Φ = 20 bits). c. Reduction of information integration through loss of integration. The same amount of connectivity, distributed in such a way as to form four independent modules to eliminate functional integration, yields four separate complexes with much lower values of Φ (Φ = 20 bits).

Other brain regions with comparable numbers of neurons, such as the cerebellum, do not contribute to conscious experience

Consider now the cerebellum. This brain region contains more neurons than the cerebral cortex, has huge numbers of synapses, and receives mapped inputs from the environment and controls several outputs. However, in striking contrast to the thalamocortical system, lesions or ablations indicate that the direct contribution of the cerebellum to conscious experience is minimal. Why is this the case?

According to the theory, the reason lies with the organization of cerebellar connections, which is radically different from that of the thalamocortical system and is not well suited to information integration. Specifically, the organization of the connections is such that individual patches of cerebellar cortex tend to be activated independently of one another, with little interaction possible between distant patches [41,42]. This suggests that cerebellar connections may not be organized so as to generate a large complex of high Φ, but rather to give rise to many small complexes each with a low value of Φ. Such an organization seems to be highly suited for both the learning and the rapid, effortless execution of informationally insulated subroutines.

This concept is illustrated in Fig. 4a, which shows a strongly modular network, consisting of three modules of eight strongly interconnected elements each. This network yields Φ = 20 bits for each of its three modules, which form the system's three complexes. This example indicates that, irrespective of how many elements and connections are present in a neural structure, if that structure is organized in a strongly modular manner with little interaction among modules, complex size and Φ values are necessarily low. According to the information integration theory, this is the reason why these systems, although computationally very sophisticated, contribute little to consciousness. It is also the reason why there is no conscious experience associated with hypothalamic and brainstem circuits that regulate important physiological variables, such as blood pressure.
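A toy calculation can make the modularity argument concrete. Here `phi_proxy` is a hypothetical structural stand-in for Φ (the connection weight crossing the weakest bipartition), not the effective-information measure defined in the paper; the networks compared are invented for illustration.

```python
from itertools import combinations

def phi_proxy(n, edges):
    # Connection weight crossing the weakest bipartition of elements
    # 0..n-1 -- a crude structural stand-in for Phi.
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            a = set(part)
            best = min(best, sum(w for (i, j, w) in edges if (i in a) != (j in a)))
    return best

def clique(nodes):
    # all-to-all reciprocal connectivity within one module
    return [(i, j, 1.0) for i in nodes for j in nodes if i != j]

# "modular": two all-to-all modules with no connections between them
modular = clique(range(4)) + clique(range(4, 8))
# "integrated": a directed ring in which every element can reach every other
ring = [(i, (i + 1) % 8, 1.0) for i in range(8)]

print(phi_proxy(8, modular))           # 0 -- the inter-module cut crosses nothing
print(phi_proxy(4, clique(range(4))))  # 6.0 -- each module alone is a small complex
print(phi_proxy(8, ring))              # 2.0 -- every bipartition of the ring is crossed
```

However many connections the modular system contains in total, the proxy for the whole is zero because one bipartition severs nothing, mirroring why a strongly modular structure forms several small complexes rather than one large one.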

[Figure 4]

Figure 4. Information integration and complexes for other neural-like architectures. a. Schematic of a cerebellum-like organization. Shown are three modules of eight elements each, with many feed forward and lateral connections within each module but minimal connections among them. The analysis of complexes reveals three separate complexes with low values of Φ (Φ = 20 bits). There is also a large complex encompassing all the elements, but its Φ value is extremely low (Φ = 5 bits). b. Schematic of the organization of a reticular activating system. Shown is a single subcortical "reticular" element providing common input to the eight elements of a thalamocortical-like main complex (both specialized and integrated, Φ = 61 bits). Despite the diffuse projections from the reticular element on the main complex, the complex comprising all 9 elements has a much lower value of Φ (Φ = 10 bits). c. Schematic of the organization of afferent pathways. Shown are three short chains that stand for afferent pathways. Each chain connects to a port-in of a main complex having a high value of Φ (61 bits) that is thalamocortical-like (both specialized and integrated). Note that the afferent pathways and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). Thus, elements in afferent pathways can affect the main complex without belonging to it. d. Schematic of the organization of efferent pathways. Shown are three short chains that stand for efferent pathways. Each chain receives a connection from a port-out of the thalamocortical-like main complex. Also in this case, the efferent pathways and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). e. Schematic of the organization of cortico-subcortico-cortical loops. Shown are three short chains that stand for cortico-subcortico-cortical loops, which are connected to the main complex at both ports-in and ports-out. 
Again, the subcortical loops and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). Thus, elements in loops connected to the main complex can affect it without belonging to it. Note, however, that the addition of these three loops slightly increased the Φ value of the main complex (from Φ = 61 to Φ = 63 bits) by providing additional pathways for interactions among its elements.

Subcortical centers can control consciousness by modulating the readiness of the thalamocortical system without contributing directly to it

It has been known for a long time that lesions in the reticular formation of the brainstem can produce unconsciousness and coma. Conversely, stimulating the reticular formation can arouse a comatose animal and activate the thalamocortical system, making it ready to respond to stimuli [43]. Groups of neurons within the reticular formation are characterized by diffuse projections to many areas of the brain. Many such groups release neuromodulators such as acetylcholine, histamine, noradrenaline, serotonin, dopamine, and glutamate (acting on metabotropic receptors) and can have extremely widespread effects on both neural excitability and plasticity [44]. However, it would seem that the reticular formation, while necessary for the normal functioning of the thalamocortical system and therefore for the occurrence of conscious experience, may not contribute much in terms of specific dimensions of consciousness – it may work mostly like an external on-switch or as a transient booster of thalamocortical firing.

Such a role can be explained readily in terms of information integration. As shown in Fig. 4b, neural elements that have widespread and effective connections to a main complex of high Φ may nevertheless remain informationally excluded from it. Instead, they are part of a larger complex having a much lower value of Φ.

Neural activity in sensory afferents to the thalamocortical system can determine what we experience without contributing directly to it

What we see usually depends on the activity patterns that occur in the retina and that are relayed to the brain. However, many observations suggest that retinal activity does not contribute directly to conscious experience. Retinal cells surely can tell light from dark and convey that information to visual cortex, but their rapidly shifting firing patterns do not correspond well with what we perceive. For example, during blinks and eye movements retinal activity changes dramatically, but visual perception does not. The retina has a blind spot at the exit of the optic nerve where there are no photoreceptors, and it has low spatial resolution and no color sensitivity at the periphery of the visual field, but we are not aware of any of this. More importantly, lesioning the retina does not prevent conscious visual experiences. For example, a person who becomes retinally blind as an adult continues to have vivid visual images and dreams. Conversely, stimulating the retina during sleep by keeping the eyes open and presenting various visual inputs does not yield any visual experience and does not affect visual dreams. Why is it that retinal activity usually determines what we see through its action on thalamocortical circuits, but does not contribute directly to conscious experience?

As shown in Fig. 4c, adding or removing multiple, segregated incoming pathways does not change the composition of the main complex, and causes little change in its Φ. While the incoming pathways do participate in a larger complex together with the elements of the main complex, the Φ value of this larger complex is very low, being limited by the effective information between each afferent pathway and its port-in at the main complex. Thus, input pathways providing powerful inputs to a complex add nothing to the information it integrates if their effects are entirely accounted for by ports-in.
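The ports-in argument can be illustrated with the same kind of toy proxy. Assuming a purely structural stand-in for Φ (`phi_proxy`, weight across the weakest bipartition, an illustrative invention rather than the paper's measure), attaching a feedforward chain to a tightly coupled core leaves the core untouched while capping the enlarged system at the strength of the single afferent link:

```python
from itertools import combinations

def phi_proxy(n, edges):
    # weight across the weakest bipartition -- structural stand-in for Phi
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            a = set(part)
            best = min(best, sum(w for (i, j, w) in edges if (i in a) != (j in a)))
    return best

# all-to-all 4-element "main complex" (elements 0-3)
core = [(i, j, 1.0) for i in range(4) for j in range(4) if i != j]
# feedforward "afferent pathway": 6 -> 5 -> 4 -> port-in at element 0
afferent = [(6, 5, 1.0), (5, 4, 1.0), (4, 0, 1.0)]

print(phi_proxy(4, core))             # 6.0 -- the main complex
print(phi_proxy(7, core + afferent))  # 1.0 -- capped by one afferent link
```

The chain can drive the core strongly, yet the larger system's proxy is limited by the weakest link between the pathway and its port-in, which is the structural analogue of the claim in the text.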

Neural activity in motor efferents from the thalamocortical system, while producing varied behavioral outputs, does not contribute directly to conscious experience

In neurological practice, as well as in everyday life, we tend to associate consciousness with the presence of a diverse behavioral repertoire. For example, if we ask a lot of different questions and for each of them we obtain an appropriate answer, we generally infer that a person is conscious. Such a criterion is not unreasonable in terms of information integration, given that a wide behavioral repertoire is usually indicative of a large repertoire of internal states that are available to an integrated system. However, it appears that neural activity in motor pathways, which is necessary to bring about such diverse behavioral responses, does not in itself contribute to consciousness. For example, patients with the locked-in syndrome, who are completely paralyzed except for the ability to gaze upwards, are fully conscious. Similarly, while we are completely paralyzed during dreams, consciousness is not impaired by the absence of behavior. Even lesions of central motor areas do not impair consciousness.

Why is it that neurons in motor pathways, which can produce a large repertoire of different outputs and thereby relay a large amount of information about different conscious states, do not contribute directly to consciousness? As shown in Fig. 4d, adding or removing multiple, segregated outgoing pathways to a main complex does not change the composition of the main complex, and does not change its Φ value. Like incoming pathways, outgoing pathways do participate in a larger complex together with the elements of the main complex, but the Φ value of this larger complex is very low, being limited by the effective information between each port-out of the main complex and its effector targets.

Neural processes in cortico-subcortico-cortical loops, while important in the production and sequencing of action, thought, and language, do not contribute directly to conscious experience

Another set of neural structures that may not contribute directly to conscious experience are subcortical structures such as the basal ganglia. The basal ganglia are large nuclei that contain many circuits arranged in parallel, some implicated in motor and oculomotor control, others, such as the dorsolateral prefrontal circuit, in cognitive functions, and others, such as the lateral orbitofrontal and anterior cingulate circuits, in social behavior, motivation, and emotion [45]. Each basal ganglia circuit originates in layer V of the cortex, and through a last step in the thalamus, returns to the cortex, not far from where the circuit started [46]. Similarly arranged cortico-ponto-cerebello-thalamo-cortical loops also exist. Why is it that these complicated neural structures, which are tightly connected to the thalamocortical system at both ends, do not seem to provide much direct contribution to conscious experience? (see Appendix, x)

As shown in Fig. 4e, the addition of many parallel cycles also generally does not change the composition of the main complex, although Φ values can be altered (see Appendix, xi). Instead, the elements of the main complex and of the connected cycles form a joint complex that can only integrate the limited amount of information exchanged within each cycle. Thus, subcortical cycles or loops implement specialized subroutines that are capable of influencing the states of the main thalamocortical complex without joining it. Such informationally insulated cortico-subcortical loops could constitute the neural substrates for many unconscious processes that can affect and be affected by conscious experience [3,47]. It is likely that new informationally insulated loops can be created through learning and repetition. For example, when first performing a new task, we are conscious of every detail of it, we make mistakes, are slow, and must make an effort. When we have learned the task well, we perform it better, faster, and with less effort, but we are also less aware of it. As suggested by imaging results, a large number of neocortical regions are involved when we first perform a task. With practice, activation is reduced or shifts to different circuits [48]. According to the theory, during the early trials, performing the task involves many regions of the main complex, while later certain aspects of the task are delegated to neural circuits, including subcortical ones, that are informationally insulated.

Many neural processes within the thalamocortical system may also influence conscious experience without contributing directly to it

Even within the thalamocortical system proper, a substantial proportion of neural activity does not appear to contribute directly to conscious experience. For example, what we see and hear requires elaborate computational processes dealing with figure-ground segregation, depth perception, object recognition, and language parsing, many of which take place in the thalamocortical system. Yet we are not aware of all this diligent buzzing: we just see objects, separated from the background and laid out in space, and know what they are, or hear words, nicely separated from each other, and know what they mean. As an example, take binocular rivalry, where the two eyes view two different images, but we perceive consciously just one image at a time, alternating in sequence. Recordings in monkeys have shown that the activity of visual neurons in certain cortical areas, such as the inferotemporal cortex, follows faithfully what the subject perceives consciously. However, in other areas, such as primary visual cortex, there are many neurons that respond to the stimulus presented to the eye, whether or not the subject is perceiving it [49]. Neuromagnetic studies in humans have shown that neural activity correlated with a stimulus that is not being consciously perceived can be recorded in many cortical areas, including the front of the brain [26]. Why does the firing of many cortical neurons carrying out the computational processes that enable object recognition (or language parsing) not correspond to anything conscious?

The situation is similar on the executive side of consciousness. When we plan to do or say something, we are vaguely conscious of what we intend, and presumably these intentions are reflected in specific firing patterns of certain neuronal groups. Our vague intentions are then translated almost miraculously into the right words, and strung together to form a syntactically correct sentence that conveys what we meant to say. And yet again, we are not at all conscious of the complicated processing that is needed to carry out our intentions, much of which takes place in the cortex. What determines whether the firing of neurons within the thalamocortical system contributes directly to consciousness or not? According to the information integration theory, the same considerations that apply to input and output circuits and to cortico-subcortico-cortical loops also apply to circuits and loops contained entirely within the thalamocortical system. Thus, the theory predicts that activity within certain cortical circuits does not contribute to consciousness because such circuits implement informationally insulated loops that remain outside of the main thalamocortical complex. At this stage, however, it is hard to say precisely which cortical circuits may be informationally insulated. Are primary sensory cortices organized like massive afferent pathways to a main complex "higher up" in the cortical hierarchy? Is much of prefrontal cortex organized like a massive efferent pathway? Do certain cortical areas, such as those belonging to the dorsal visual stream, remain partly segregated from the main complex? Do interactions within a cortico-thalamic minicolumn qualify as intrinsic mini-loops that support the main complex without being part of it? Unfortunately, answering these questions and properly testing the predictions of the theory requires a much better understanding of cortical neuroanatomy than is presently available [50,51].

Consciousness can be split if the thalamocortical system is split

Studies of split-brain patients, whose corpus callosum was sectioned for therapeutic reasons, show that each hemisphere has its own, private conscious experience. The dominant, linguistically competent hemisphere does not seem to suffer a major impairment of consciousness after the operation. The non-dominant hemisphere, although it loses some important abilities and its residual capacities are harder to assess, also appears to be conscious [5]. Some information, e.g. emotional arousal, seems to be shared across the hemispheres, probably thanks to subcortical common inputs.

Viewing consciousness as information integration suggests straightforward explanations for these puzzling observations. Consider the simplified model in Fig. 5a. A main complex having high Φ includes two sets of elements ("hemispheres") having similar internal architecture that are joined by "callosal" connections (top panel). When the callosal connections are cut (bottom panel), the single main complex splits and is replaced by two smaller complexes corresponding to the two hemispheres. There is also a complex, of much lower Φ, which includes both hemispheres and a "subcortical" element that provides them with common input. Thus, there is a sense in which the two hemispheres still form an integrated entity, but the information they share is minimal (see Appendix, xii).
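A miniature version of the split-brain scenario can be computed with the same hypothetical structural proxy for Φ (weight across the weakest bipartition; a stand-in, not the paper's effective-information measure). Cutting the "callosal" connections sends the whole-system value to zero while each "hemisphere" retains a positive value of its own:

```python
from itertools import combinations

def phi_proxy(n, edges):
    # weight across the weakest bipartition -- structural stand-in for Phi
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            a = set(part)
            best = min(best, sum(w for (i, j, w) in edges if (i in a) != (j in a)))
    return best

def clique(nodes):
    return [(i, j, 1.0) for i in nodes for j in nodes if i != j]

hemiL, hemiR = clique(range(4)), clique(range(4, 8))
# reciprocal "callosal" connections joining the two hemispheres
callosum = [(0, 4, 1.0), (4, 0, 1.0), (1, 5, 1.0), (5, 1, 1.0)]

print(phi_proxy(8, hemiL + hemiR + callosum))  # 4.0 -- one complex spanning both
print(phi_proxy(8, hemiL + hemiR))             # 0 -- the "callosotomy" splits it
print(phi_proxy(4, hemiL))                     # 6.0 -- each hemisphere remains one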

[Figure 5]

Figure 5. Information integration and complexes after anatomical and functional disconnections. a. Schematic of a split-brain-like anatomical disconnection. Top. Shown is a large main complex obtained by connecting two thalamocortical-like subsets through "callosum-like" reciprocal connections. There is also a single element that projects to all other elements, representing "subcortical" common input. Note that the Φ value for the main complex (16 elements) is high (Φ = 72 bits). There is also a larger complex including the "subcortical" element, but its Φ value is low (Φ = 10). Bottom. If the "callosum-like" connections are cut, one obtains two 8-element complexes, corresponding to the two "hemispheres", whose Φ value is reduced but still high (Φ = 61 bits). The two "hemispheres" still share some information due to common input from the "subcortical" element with which they form a large complex of low Φ. b. Schematic of a functional disconnection. Top. Shown is a large main complex obtained by linking with reciprocal connections a "supramodal" module of four elements (cornerstone) with a "visual" module (to its right) and an "auditory" module (below). Note that there are no direct connections between the "visual" and "auditory" modules. The 12 elements together form a main complex with Φ = 61 bits. Bottom. If the "auditory" module is functionally disconnected from the "supramodal" one by inactivating its four elements (indicated in blue), the main complex shrinks to include just the "supramodal" and "visual" modules. In this case, the Φ value is only minimally reduced (Φ = 57 bits).

Some parts of the thalamocortical system may contribute to conscious experience at one time and not at another

Until now, we have considered structural aspects of the organization of the nervous system that, according to the information integration theory, explain why certain parts of the brain contribute directly to consciousness and others do not, or much less so. In addition to neuroanatomical factors, neurophysiological factors are also important in determining to what extent a given neural structure can integrate information. For example, anatomical connections between brain regions may or may not be functional, depending on either pathological or physiological factors. Functional disconnections between certain parts of the brain and others are thought to play a role in psychiatric conversion and dissociative disorders, may occur during dreaming, and may be implicated in conditions such as hypnosis. Thus, functional disconnections, just like anatomical disconnections, may lead to a restriction of the neural substrate of consciousness.

It is also likely that certain attentional phenomena may correspond to changes in the neural substrate of consciousness. For example, when one is absorbed in thought, or focused exclusively on a given sensory modality, such as vision, the neural substrate of consciousness may not be the same as when we are diffusely monitoring the environment. Phenomena such as the attentional blink, where a fixed sensory input may at times make it to consciousness and at times not, may also be due to changes in functional connectivity: access to the main thalamocortical complex may be enabled or not based on dynamics intrinsic to the complex [52]. Phenomena such as binocular rivalry may also be related, at least in part, to dynamic changes in the composition of the main thalamocortical complex caused by transient changes in functional connectivity [53]. At present, however, it is still not easy to determine whether a particular group of neurons is excluded from the main complex because of hard-wired anatomical constraints, or is transiently disconnected due to functional changes.

Figure 5b (top panel) shows a simple model obtained by taking three subsets of elements of (relatively) high Φ and connecting them through reciprocal connections. Specifically, the first subset, which stands for supramodal areas of the brain, is reciprocally connected to the second and third subsets, which stand for visual and auditory areas, respectively. In this idealized example, the visual and auditory subsets are not connected directly among themselves. As one can see, the three subsets thus connected form a single main complex having a Φ value of 61 bits. In the bottom panel, the auditory subset has been disconnected, in a functional sense, by mimicking a profound deactivation of its elements. The result is that the main complex shrinks and the auditory subset ends up outside the main complex. Note, however, that in this particular case the value of Φ changes very little (57 bits), indicating that it might be possible for the borders of the main complex to change dynamically while the amount of consciousness is not substantially altered. What would change, of course, would be the configuration of the space of informational relationships. These simulations suggest that attentional mechanisms may work both by changing neuronal firing rates, and therefore saliency within qualia space, as well as by modifying neuronal readiness to fire, and therefore the boundaries of the main complex and of qualia space itself. This is why the set of elements underlying consciousness is not static, but can be considered to form a "dynamic complex" or "dynamic core" [1,9].
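The functional-disconnection scenario of Fig. 5b can likewise be caricatured with a hypothetical structural proxy for Φ (weight across the weakest bipartition; the modules and link placements below are invented for illustration). Deactivating the "auditory" module shrinks the complex while the proxy value barely moves, echoing the small change from Φ = 61 to 57 bits described above:

```python
from itertools import combinations

def phi_proxy(n, edges):
    # weight across the weakest bipartition -- structural stand-in for Phi
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            a = set(part)
            best = min(best, sum(w for (i, j, w) in edges if (i in a) != (j in a)))
    return best

def clique(nodes):
    return [(i, j, 1.0) for i in nodes for j in nodes if i != j]

supra = clique(range(4))         # "supramodal" module: elements 0-3
visual = clique(range(4, 8))     # "visual" module: elements 4-7
auditory = clique(range(8, 12))  # "auditory" module: elements 8-11
links_sv = [(0, 4, 1.0), (4, 0, 1.0), (1, 5, 1.0), (5, 1, 1.0)]
links_sa = [(2, 8, 1.0), (8, 2, 1.0), (3, 9, 1.0), (9, 3, 1.0)]

print(phi_proxy(12, supra + visual + auditory + links_sv + links_sa))  # 4.0
# "deactivate" the auditory module: drop its elements and connections
print(phi_proxy(8, supra + visual + links_sv))  # 4.0 -- essentially unchanged
```

In this toy version the boundary of the complex changes while its proxy value does not, the structural analogue of a dynamic core whose borders shift without a large change in the amount of consciousness.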

Depending on certain neurophysiological parameters, the same thalamocortical network can generate much or little conscious experience

Another example of the importance of neurophysiological parameters is provided by sleep – the most familiar of the alterations of consciousness, and yet one of the most striking. Upon awakening from dreamless sleep, we have the peculiar impression that for a while we were not there at all nor, as far as we are concerned, was the rest of the world. This everyday observation tells us vividly that consciousness can come and go, grow and shrink. Indeed, if we did not sleep, it might be hard to imagine that consciousness is not a given, but depends somehow on the way our brain is functioning. The loss of consciousness between falling asleep and waking up is relative, rather than absolute [54]. Thus, careful studies of mental activity reported immediately after awakening have shown that some degree of consciousness is maintained during much of sleep. Many awakenings, especially from rapid eye movement (REM) sleep, yield dream reports, and dreams can be at times as vivid and intensely conscious as waking experiences. Dream-like consciousness also occurs during various phases of slow wave sleep, especially at sleep onset and during the last part of the night. Nevertheless, a certain proportion of awakenings do not yield any dream report, suggesting a marked reduction of consciousness. Such "empty" awakenings typically occur during the deepest stages of slow wave sleep (stages 3 and 4), especially during the first half of the night.

Which neurophysiological parameters are responsible for the remarkable changes in the quantity and quality of conscious experience that occur during sleep? We know for certain that the brain does not simply shut off during sleep. During REM sleep, for example, neural activity is as high as, if not higher than, during wakefulness, and EEG recordings show low-voltage fast-activity. This EEG pattern is known as "activated" because cortical neurons, being steadily depolarized and close to their firing threshold, are ready to respond to incoming inputs. Given these similarities, it is perhaps not surprising that consciousness should be present during both states. Changes in the quality of consciousness, however, do occur, and they correspond closely to relative changes in the activation of different brain areas [54].

During slow wave sleep, average firing rates of cortical neurons are also similar to those observed during quiet wakefulness. However, due to changes in the level of certain neuromodulators, virtually all cortical neurons engage in slow oscillations at around 1 Hz, which are reflected in slow waves in the EEG [55]. Slow oscillations consist of a depolarized phase, during which the membrane potential of cortical neurons is close to firing threshold and spontaneous firing rates are similar to quiet wakefulness, and of a hyperpolarized phase, during which neurons become silent and are further away from firing threshold. From the perspective of information integration, a reduction in the readiness to respond to stimuli during the hyperpolarization phase of the slow oscillation would imply a reduction of consciousness. It would be as if we were watching very short fragments of a movie interspersed with repeated unconscious "blanks" in which we cannot see, think, or remember anything, and therefore have little to report. A similar kind of unreadiness to respond, associated with profound hyperpolarization, is found in deep anesthesia, another condition where consciousness is impaired. Studies using transcranial magnetic stimulation in conjunction with high-density EEG are currently testing how response readiness changes during the sleep-waking cycle.

From the perspective of information integration, a reduction of consciousness during certain phases of sleep would occur even if the brain remained capable of responding to perturbations, provided its response were to lack differentiation. This prediction is borne out by detailed computer models of a portion of the visual thalamocortical system (Hill and Tononi, in preparation). According to these simulations, in the waking mode different perturbations of the thalamocortical network yield specific responses. In the sleep mode, instead, the network becomes bistable: specific effects of different perturbations are quickly washed out and their propagation impeded; the whole network transitions into the depolarized or into the hyperpolarized phase of the slow oscillation – a stereotypic response that is observed irrespective of the particular perturbation (see Appendix, xiii). And of course, this bistability is also evident in the spontaneous behavior of the network: during each slow oscillation, cortical neurons are either all firing or all silent, with little freedom in between. In summary, these simulations indicate that, even if the anatomical connectivity of a complex stays the same, a change in key parameters governing the readiness of neurons to respond and the differentiation of their responses may alter radically the Φ value of the complex, with corresponding consequences on consciousness.
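The loss of differentiation in the bistable sleep mode can be caricatured in a few lines. Everything here is a toy assumption, not the cited model: "waking" dynamics let each unit follow its neighbor, so the specific identity of a perturbation keeps propagating, while "sleep" dynamics collapse the whole network to all-on or all-off depending only on mean activity.

```python
def step(state, mode):
    n = len(state)
    if mode == "sleep":
        # strong global coupling: bistable collapse to all-on or all-off --
        # a stereotypic response irrespective of the particular perturbation
        return [1 if sum(state) / n >= 0.5 else 0] * n
    # "waking": weak local coupling -- each unit follows its neighbor,
    # so the specific pattern of a perturbation keeps propagating
    return [state[(i - 1) % n] for i in range(n)]

def run(state, mode, steps=5):
    for _ in range(steps):
        state = step(state, mode)
    return state

p1 = [1, 1, 0, 0, 0, 0, 0, 0]  # two different perturbations
p2 = [0, 0, 0, 0, 0, 1, 1, 0]
print(run(p1, "sleep") == run(p2, "sleep"))    # True -- differences washed out
print(run(p1, "waking") == run(p2, "waking"))  # False -- still distinguishable
```

In the sleep mode the repertoire of responses collapses to two uniform states, so different perturbations become indistinguishable; in the waking mode their specific effects persist, which is the differentiation that the theory ties to consciousness.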

Conscious experience and time requirements

Consciousness not only requires a neural substrate with appropriate anatomical structure and appropriate physiological parameters: it also needs time. As was mentioned earlier, studies of how a percept is progressively specified and stabilized indicate that it takes up to 100–200 milliseconds to develop a fully formed sensory experience, and that the surfacing of a conscious thought may take even longer. Experiments in which the somatosensory areas of the cerebral cortex were stimulated directly indicate that low intensity stimuli must be sustained for up to 500 milliseconds to produce a conscious sensation [56]. Multi-unit recordings in the primary visual cortex of monkeys show that, after a stimulus is presented, the firing rate of many neurons increases irrespective of whether the animal reports seeing a figure or not. After 80–100 milliseconds, however, their discharge accurately predicts the conscious detection of the figure. Thus, the firing of the same cortical neurons may correlate with consciousness at certain times, but not at other times [57]. What determines when the firing of the same cortical neurons contributes to conscious experience and when it does not? And why may it take up to hundreds of milliseconds before a conscious experience is generated?

The theory predicts that the time requirements for the generation of conscious experience in the brain emerge directly from the time requirements for the build-up of effective interactions among the elements of the main complex. As was mentioned above, if one were to perturb half of the elements of the main complex for less than a millisecond, no perturbations would produce any effect on the other half within this time window, and Φ would be equal to zero. After say 100 milliseconds, however, there is enough time for differential effects to be manifested, and Φ should grow. This prediction is confirmed by results obtained using large-scale computer simulations of the thalamocortical system, where the time course of causal interactions and functional integration can be studied in detail ([38,58,59]; Hill and Tononi, unpublished results). For example, in a model including nine functionally segregated visual areas, the time it takes for functionally specialized neurons located in several different areas to interact constructively and produce a specific, correlated firing pattern is at least 80 milliseconds [38]. These correlated firing patterns last for several hundred milliseconds. After one or more seconds, however, the network settles into spontaneous activity states that are largely independent of previous perturbations. Thus, the characteristic time scale for maximally differentiated responses in thalamocortical networks appears to lie between a few tens of milliseconds and a few seconds at most.
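The time requirement can be made concrete with a toy chain in which each connection imposes one step of transmission delay (an illustrative assumption, not the cited large-scale model). A perturbation applied to one end cannot influence the far end, and hence cannot be integrated across that cut, until enough steps have elapsed:

```python
n = 8

def step(state):
    # unit transmission delay along a feedforward chain 0 -> 1 -> ... -> 7
    return [state[0]] + [state[i - 1] for i in range(1, n)]

base = [0] * n              # unperturbed network
pert = [1] + [0] * (n - 1)  # perturb element 0 only

t = 0
while pert[n - 1] == base[n - 1]:
    base, pert, t = step(base), step(pert), t + 1

print(t)  # 7 -- before this, any measure of integration across that cut is zero
```

With seven intervening connections the perturbation needs seven steps to reach the far element; measured over any shorter window, the two halves are causally independent, which is the toy analogue of Φ being zero for very brief perturbations.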

In summary, the time scale of neurophysiological interactions needed to integrate information among distant cortical regions appears to be consistent with that required by psychophysical observations (microgenesis), by stimulation experiments, and by recording experiments.

Summary: seeing blue

As the examples above indicate, the information integration theory is consistent with several experimental findings concerning the neural substrates of consciousness. Moreover, the theory can offer a principled explanation of why consciousness is associated with certain parts of the brain and certain global functional states, and not with others. To restate the central claims of the theory, let us return once more to the thought experiment presented at the outset.

Imagine again that you are comfortably facing a blank screen that is alternately on and off. When the screen turns on, you see a homogeneous blue field; indeed, for the sake of the argument, we assume that you are having a "pure" perception of blue, unencumbered by extraneous percepts or thoughts (perhaps as can be achieved in certain meditative states). As you have been instructed, you signal your perception of blue by pushing a button. Now consider an extremely simplified scenario of the neural events that might accompany your seeing blue. When the screen turns on, a volley of activity propagates through the visual afferent pathways, involving successive stages such as retinal short wavelength cones, blue-yellow opponent cells, color constant cells, and so on. Eventually, this volley of activity in the visual afferent pathways leads to the firing of some neuronal groups in color-selective areas of the temporal lobe that, on empirical grounds, are our best bet for the neural correlate of blue: i) their activity correlates well with your perception of blue whether you see, imagine, or dream blue, in a way that is as stable and as susceptible to illusions as your perception of blue; ii) their microstimulation leads to the perception of blue; and iii) their selective lesion makes you unable to perceive blue. Let us assume, then, that these neuronal groups quickly increase their firing, and within a few tens of milliseconds they reach and then maintain increased levels of firing (see Appendix, xiv). We also assume that, at the same time, neuronal groups in neighboring cortical areas go on firing at a baseline level, largely unaffected by the blue light. These include neuronal groups in other visual areas that are selective for shape or movement; neuronal groups in auditory areas that are selective for tones; and many others. 
On the other hand, the volley of activity originating in the retina does not exhaust itself by generating sustained activity in the color areas of the temporal lobe. Part of the volley proceeds at great speed and activates efferent motor pathways, which cause you to push the signaling button. Another part activates cortico-subcortico-cortical loops in your prefrontal cortex and basal ganglia, which almost make you speak the word "blue" aloud. In the meantime, many other parts of the brain are buzzing along, unaffected by what is going on in the visual system: cerebellar circuits are actively stabilizing your posture and gaze, and hypothalamic-brainstem circuits are actively stabilizing your blood pressure. What components in this simplified neural scenario are essential for your conscious experience of blue, and why?

The information integration theory makes several claims that lead to associated predictions. A first claim is that the neural substrate of consciousness as we know it is a complex of high Φ that is capable of integrating a large amount of information – the main complex. Therefore, whether a group of neurons contributes directly to consciousness is a function of its belonging to the main complex or not. In this example, the theory would predict that blue-selective neurons in some high-level color area should be inside the main complex; on the other hand, blue-sensitive neurons in afferent visual pathways, neurons in efferent pathways mediating the button-pressing response, neurons in cortico-subcortico-cortical and intracortical loops mediating subvocalization of the word "blue", neurons in the cerebellum controlling posture and neurons in hypothalamic-brainstem circuits controlling blood pressure should all be outside. This is so even though these neurons may be equally active when you see blue, and even though some of them may be connected to elements of the main complex. In principle, joint microstimulation and recording experiments, and to some extent an analysis of patterns of synchronization, could determine participation in the main complex and test this prediction. The theory also predicts that blue-selective neurons in the main complex contribute to the conscious experience of blue only if their activation is sufficiently strong or sustained that they can make a difference, in informational terms, to the rest of the complex. A further prediction is that, if a group of neurons that is normally part of the main complex becomes informationally disconnected from it, as could occur through attentional effects or in certain phases of sleep, the same group of neurons, firing in exactly the same way, would not contribute to consciousness. 
Moreover, according to the theory, the other groups of neurons within the main complex are essential to our conscious experience of blue even if, as in this example, they are not activated. This is not difficult to see. Imagine that, starting from an intact main complex, we were to remove one element after another, except for the active, blue-selective one. If an inactive element contributing to "seeing red" were removed, blue would not be experienced as blue anymore, but as some less differentiated color, perhaps not unlike those experienced by certain dichromats. If further elements of the main complex were removed, including those contributing to shapes, to sounds, to thoughts and so forth, one would soon drop to such a low level of consciousness that "seeing blue" would become meaningless: the "feeling" (and meaning) of the quale "blue" would have been eroded down to nothing. Indeed, while the remaining neural circuits may still be able to discriminate blue from other colors, they would do so very much as a photodiode does (see Appendix, xv).
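The claim that what matters is membership in the main complex, rather than mere activity or connectivity, can be phrased as a search procedure. The sketch below is hypothetical, not from the paper: given some function `phi(subset)` (however obtained), it lists complexes, that is, subsets with Φ > 0 not contained in any subset of strictly higher Φ, and puts the main complex first.

```python
from itertools import combinations

def find_complexes(elements, phi):
    """List complexes: subsets with phi > 0 that are not contained
    in a subset of strictly higher phi. `phi` is assumed to be a
    caller-supplied function of a frozenset of elements. The first
    entry of the result, having maximal phi, is the main complex."""
    subsets = [frozenset(c)
               for r in range(2, len(elements) + 1)
               for c in combinations(elements, r)]
    value = {s: phi(s) for s in subsets}
    complexes = [s for s in subsets
                 if value[s] > 0
                 and not any(s < t and value[t] > value[s] for t in subsets)]
    return sorted(complexes, key=lambda s: value[s], reverse=True)
```

Note that, as in the theory, a subset of high Φ excludes its own sub-subsets of lower Φ from being complexes, while smaller complexes can still coexist alongside the main one.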

A second claim of the theory is that the quality of consciousness is determined by the informational relationships within the main complex. Therefore, how a group of neurons contributes to consciousness is a function of its informational relationships inside the complex and not outside of it. In this example, blue-selective neurons within the main complex have become blue-selective no doubt thanks to the inputs received from the appropriate afferent pathways, and ultimately because of some aspects of the statistics of the environment and the resulting plastic changes throughout the brain. However, the theory predicts that their present firing contributes the quale "blue" exclusively because of their informational relationships within the main complex. If connections outside the main complex were to be manipulated, including the afferent color pathways, the experience elicited by activating the blue-selective neurons within the complex would stay the same. Conversely, if the relationships inside the main complex were to change, as could be done by changing the pattern of connections within the color-selective area and with the rest of the complex, so would the conscious experience of blue. That is, activating the same neurons would produce a different conscious experience.

Implications of the hypothesis

To conclude, we mention some implications of the information integration theory of consciousness. At the most general level, the theory has ontological implications. Taking its start from phenomenology and the use of thought experiments, it treats subjective experience as a system's capacity to integrate information. On this view, experience (that is, integrated information) is a fundamental quantity, like mass, charge, or energy. It follows that any physical system, irrespective of what it is made of, has subjective experience to the extent that it is capable of integrating information. An intriguing implication is that it should be possible to construct conscious artifacts by endowing them with high values of Φ. Moreover, it may be possible to control the quality of their consciousness by appropriately structuring their effective information matrix.

Another implication is that consciousness is not an all-or-none property: it is graded, and present to varying degrees in most natural (and artificial) systems. That said, the conditions needed to build complexes of high Φ, corresponding to high levels of experience, are clearly not easy to meet, so such high levels of experience are probably restricted to a few kinds of systems, for example complex brains appropriately organized to maximize functional specialization and integration. A further implication is that consciousness exists, to varying degrees, over a wide range of temporal and spatial scales, although for most systems there is a privileged spatiotemporal scale at which information integration reaches a maximum.

Consciousness is characterized here as a disposition or potentiality – in this case as the potential differentiation of a system's responses to all possible perturbations, yet it is undeniably actual. Consider another thought experiment: you could be in a coma for days, awaken to consciousness for just one second, and revert to a coma. As long as your thalamocortical system can function well for that one second, you will be conscious. That is, a system does not have to explore its repertoire of states to be conscious, or to know how conscious it is supposed to be: what counts is only that the repertoire is potentially available. While this may sound strange, fundamental quantities associated with physical systems can also be characterized as dispositions or potentialities, yet have actual effects. For example, mass can be characterized as a potentiality – say the resistance that a body would offer to acceleration by a force – yet it exerts undeniable effects, such as attracting other masses. This too has intriguing implications. For example, because in this view consciousness corresponds to the potential of an integrated system to enter a large number of states by way of causal interactions within it, experience is present as long as such potential is present, whether or not the system's elements are activated. Thus, the theory predicts that a brain where no neurons were activated, but were kept ready to respond in a differentiated manner to different perturbations, would be conscious (perhaps that nothing was going on). Also, because consciousness is a property of a system, not of a state, the state the system is in only determines which particular experience becomes actual at any given time, and not whether experience is present. Thus, a brain where each neuron were microstimulated to fire as an exact replica of your brain, but where synaptic interactions had been blocked, would be unconscious.

The theory also predicts that consciousness depends exclusively on a system's ability to integrate information, whether or not it has a sense of self, language, emotion, a body, or an embedding in an environment, however much this may run against common intuitions. This prediction is consistent with the preservation of consciousness during REM sleep, when both input signals from the body and output signals to it are markedly reduced. The prediction could be evaluated more stringently by transiently inactivating brain areas that mediate the sense of self, language, or emotion.

Nevertheless, the theory recognizes that these same factors are important historically because they favor the development of neural circuits forming a main complex of high Φ. For example, the ability of a system to integrate information grows as that system incorporates statistical regularities from its environment and learns [14]. In this sense, the emergence of consciousness in biological systems is predicated on a long evolutionary history, on individual development, and on experience-dependent change in neural connectivity. Indeed, the theory also suggests that consciousness provides an adaptive advantage and may have evolved precisely because it is identical with the ability to integrate a lot of information in a short period of time. If such information is about the environment, the implication is that, the more an animal is conscious, the larger the number of variables it can take into account jointly to guide its behavior.

Another implication of the theory is that, in principle, the presence and the amount of consciousness can be determined even when no verbal report is possible: in infants and animals, in people in stupor or in a vegetative state, in akinetic mutism, during psychomotor seizures, and in sleepwalkers. In practice, of course, calculating the exact value of Φ for such systems will be difficult, but approximations and educated inferences from the available evidence are conceivable.

At present, the validity of the theoretical framework presented here, and the plausibility of its predictions, rest on how coherently it can account for the basic but puzzling facts about the relationship between brain and consciousness. Advances in experimental techniques that allow simultaneous stimulation and recording across wide regions of the brain should eventually put the theory's predictions to a rigorous test. Large-scale models based on the actual anatomy of the brain will also be important: such models should permit more detailed calculations of how different brain structures and specific neurophysiological parameters relate to the capacity to integrate information [14][15][16]. Finally, the aim of the theoretical framework presented here was to understand, at the most general level, the necessary and sufficient conditions that determine the quantity and quality of consciousness. Further theoretical developments will be needed to understand many issues that matter for the study of consciousness in its biological and psychological context, such as its relationship to memory and language, its higher-order aspects [17][18], and its relationship to the self [19]. Fully understanding how the brain generates human consciousness remains, without doubt, an extraordinarily difficult problem. But if experimental investigation proceeds, complemented by theoretical approaches, it need not remain forever beyond the reach of science.

Appendix

i. The problem can also be posed in neural terms. When we see light, certain neurons in the retina turn on, as do other neurons higher up in the brain. Based on what we know, the activity of neurons in the retina is not directly associated with conscious experience of light and dark – they behave just like biological photodiodes that signal to higher centers. Somewhere in those higher centers, however, there seem to be some neurons whose activity is indeed tightly correlated with the conscious experience of light and dark. What is special about these higher neurons?

ii. The amount of information here has nothing to do with how complicated the scene is, that is, with how many objects can be seen; it relates only to the number of available alternatives.

iii. This quantity is also called MIBcomplexity, for minimum information bipartition complexity. Note that, in most cases, the bipartitions for which the normalized value of EI will be at a minimum, everything else being equal, will be bipartitions that cut the system in two halves, i.e. midpartitions [2].

iv. Complexes can also be defined using mutual information instead of effective information, by exploiting the endogenous sources of variance that may exist in an isolated system [8]. A related measure could be constructed using the formalism of ε-machines [63]. Φ would then be related to the Hμ of the minimal ε-machine capable of reproducing the causal structure of a process, i.e. of the ε-machine that cannot be decomposed into a collection of lower Hμ ε-machines.

v. An elementary description of qualia space is provided by the authors in chapter 13 of [20].

vi. While the entries in the matrix contain all the relevant informational relationships defining this space, they do not reveal necessarily how the space is organized in an economical and explicit manner. This may be done by employing procedures akin to multidimensional scaling although, since the matrix is asymmetrical and involves high-order terms (among subsets of elements), this may not be easy. Satisfactorily mapping the phenomenological differences between modalities, submodalities and dimensions onto the structure of qualia space will require that we thoroughly characterize and understand the latter.
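Footnote vi mentions procedures akin to multidimensional scaling. As a rough illustration only (the real effective-information matrix is asymmetric and involves high-order terms among subsets, which this sketch ignores by crudely symmetrizing), classical MDS over a dissimilarity matrix could look as follows; the use of `numpy` and the symmetrization step are my assumptions, not part of the paper.

```python
import numpy as np

def mds_embed(d, dim=2):
    """Classical (Torgerson) multidimensional scaling.

    d: (n, n) matrix of dissimilarities between elements, e.g.
    derived from an effective-information matrix; it is symmetrized
    here, a simplification since the real matrix is asymmetric.
    Returns an (n, dim) embedding whose distances approximate d."""
    d = 0.5 * (d + d.T)                       # crude symmetrization
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (d ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]      # largest eigenvalues first
    vals = np.clip(vals[order], 0.0, None)
    return vecs[:, order] * np.sqrt(vals)
```

As the footnote warns, mapping the phenomenological structure of qualia space onto such an embedding would require first characterizing the matrix itself; the sketch only shows the "economical and explicit" layout step.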

vii. Of course, sensory afferents usually play a role in determining which particular conscious experience we have at any given time (they better do so, if experience is to have an adaptive relationship to the environment). Nevertheless, particular experiences can be triggered even when we are disconnected from the environment, as in dreams.

viii. Note also that a "pure" sensation of blue defines a point in this N-dimensional qualia space as much as the experience of a busy city street, full of different objects, of sounds, smells, associations, and reflections defines another point.

ix. However, certain areas such as the posterior cingulate cortex and precuneus, some lateral parietal areas, and associated paramedian thalamic nuclei, may constitute strategic crossroads for coordinating the interactions among different sensory maps and frames of reference concerning the body and the environment. Bilateral lesions to such areas may lead to a virtual breakdown of information integration in the thalamocortical system [22,24]. A global, persistent disruption of consciousness can also be produced by focal lesions of paramedian mesodiencephalic structures, which include the intralaminar thalamic nuclei. Most likely, such focal lesions are catastrophic because the strategic location and connectivity of paramedian structures ensure that distributed cortico-thalamic loops can work together as a system.

x. Statements about the lack of direct contributions to consciousness of basal ganglia loops need to be qualified due to the difficulty of evaluating the precise effects of their selective inactivation, as well as to the unreliability of introspective assessments about the richness of one's experience, especially after brain lesions. Similar considerations apply to brain structures not discussed here, such as the claustrum, the amygdala, and the basal forebrain.

xi. A similar kind of analysis could be applied to other neurological disconnection syndromes.

xii. An explanation in terms of reduced degrees of freedom may also apply to loss of consciousness in absence and other seizures, during which neural activity is extremely high and near-synchronous over many cortical regions (Tononi, unpublished results).

xiii. While we do not yet have such a tight case for the neural correlate of blue, we are close to it with motion sensitive cells in area MT and in somatosensory cortex, at least in monkeys [64].

xv. In this sense, a particular conscious experience, its meaning, and the underlying informational relationships within a complex end up being one and the same thing. Such internalistic, relationally defined meanings generally relate to and ultimately derive from entities in the world. To the extent that the brain has a long evolutionary history and is shaped by experience, it is clear that internally specified meanings (and conscious states) bear an adaptive relationship to what is out there.

Acknowledgements

Thanks to Chiara Cirelli, Lice Ghilardi, Sean Hill, Marcello Massimini, and Olaf Sporns for insightful discussions.

References

  1. Tononi G, Edelman GM: Consciousness and complexity. Science 1998, 282(5395):1846-1851.
  2. Tononi G: Information measures for conscious experience. Arch Ital Biol 2001, 139(4):367-371.
  3. Tononi G: Consciousness and the brain: Theoretical aspects. In Encyclopedia of Neuroscience. 3rd edition. Edited by Adelman G, Smith B. Elsevier; 2004.
  4. Shannon CE, Weaver W: The mathematical theory of communication. Urbana: University of Illinois Press; 1963.
  5. Sperry R: Consciousness, personal identity and the divided brain. Neuropsychologia 1984, 22(6):661-673.
  6. Bachmann T: Microgenetic approach to the conscious mind. Amsterdam; Philadelphia: John Benjamins Pub. Co; 2000.
  7. Poppel E, Artin T: Mindworks: Time and conscious experience. Boston, MA, US: Harcourt Brace Jovanovich, Inc; 1988.
  8. Tononi G, Sporns O: Measuring information integration. BMC Neurosci 2003, 4(1):31.
  9. Nagel T: What is the mind-body problem? Ciba Foundation Symposium 1993, 174:1-7.
  10. Buonomano DV, Merzenich MM: Cortical plasticity: from synapses to maps. Annu Rev Neurosci 1998, 21:149-186.
  11. Zeki S: A vision of the brain. Oxford; Boston: Blackwell Scientific Publications; 1993.
  12. Tononi G: Galileo e il fotodiodo. Bari: Laterza; 2003.
  13. Tononi G, Sporns O, Edelman GM: A complexity measure for selective matching of signals by the brain. Proceedings of the National Academy of Sciences of the United States of America 1996, 93(8):3422-3427.
  14. Tononi G, Sporns O, Edelman GM: Reentry and the problem of integrating multiple cortical areas: simulation of dynamic integration in the visual system. Cerebral Cortex 1992, 2(4):310-335.
  15. Ascoli GA: Progress and perspectives in computational neuroanatomy. Anat Rec 1999, 257(6):195-207.
  16. Lumer ED, Edelman GM, Tononi G: Neural dynamics in a model of the thalamocortical system. 2. The role of neural synchrony tested through perturbations of spike timing. Cerebral Cortex 1997, 7(3):228-236.
  17. Edelman GM: The remembered present: A biological theory of consciousness. New York, NY, US: BasicBooks, Inc; 1989.
  18. Damasio AR: The feeling of what happens: body and emotion in the making of consciousness. 1st edition. New York: Harcourt Brace; 1999.
  19. Metzinger T: Being no one: the self-model theory of subjectivity. Cambridge, Mass: MIT Press; 2003.
  20. Edelman GM, Tononi G: A universe of consciousness: how matter becomes imagination. 1st edition. New York, NY: Basic Books; 2000.