4 Comments
Rohit Kamath

Yes, asymmetric intelligence can bias one in favour of certain actions. In neuroscience, we see this constantly; the public 'consensus' on disease pathology often lags years behind what's being discovered in private labs or failed clinical trials. If pharma is sitting on the 'missing pieces' of the map, one could surmise that their AI would only appear to 'break' when compared to the public literature, because it is working with a more complete puzzle than the rest of us.

Looking forward to that post on biological reasoning.

I'm curious whether you think those 'asymmetric' data moats are the key to teaching AI to reason, or whether we still need a shift in how the models actually 'think' about the data they have.

Rohit Kamath

"whoever helps scientific organizations encode their own ontology, their own context, and their own judgment without surrendering them to a vendor-controlled worldview."

Are you talking about personalised AIs per organisation, trained on an internal database, judgement, and context? Someone who builds an AI, or a shell easily ingestable into a scientific org? How would that work?

Great article!

Rohit Kamath

I hadn't thought about AI in biology this way, Christopher. Great article!

"But as soon as you diverge from consensus, everything starts to break."

This is quite an ironic statement when talking about applying AI to biology. Biology has advanced precisely because researchers repeatedly diverged from the scientific consensus.

This is especially true of pharma, which is more likely to push and stick to dogma that then percolates into the medical and healthcare industries.

That is one reason, I think, why alternative or non-dogmatic therapies and approaches (even when tested by scientists) are not picked up or accepted in those two industries.

How do you think those building AI can bake a sense of critical thinking into it, or does it (for now) remain a predictive and/or pattern-finding machine, Chris?

Christopher Li

I think the non-consensus part isn't that they are anti-dogma; it comes from having asymmetric information. The reality is that pharma sits on mountains of proprietary data which, when you work with them directly, tells a different story than the published literature.

I have some predictions about where AI in biological reasoning will evolve, perhaps a post for another time.