Nigel Duffy
San Francisco, California, United States
9K followers
500+ connections
Articles by Nigel
- Can Artificial Intelligence (AI) turn California’s summer of fear to one of hope?
  While many people look forward to the summer months as a time to relax, here in California, we look to them with…
  108 reactions · 6 comments
- Two key disconnects could get in the way of effective AI regulation (Jul 28, 2020)
  Globally there is an enormous amount of focus on AI regulation. I am greatly concerned that unless industry…
  95 reactions · 1 comment
- Car fires and AI ethics (Mar 12, 2020)
  On June 9, 1978 Ford recalled 1.5 million Ford Pintos and Mercury Bobcats — the largest recall in automotive history at…
  42 reactions · 2 comments
- Winter is Coming… for AI? (Feb 26, 2019)
  My colleagues at EYQ just published a piece on whether AI will experience another “AI winter.” The fanfare around AI…
  166 reactions · 13 comments
- What is intelligence without trust? (Jan 29, 2019)
  Artificial intelligence will eventually transform many enterprises and industries. But its pace of development has been…
  75 reactions · 4 comments
- Unlocking AI's potential in the enterprise (Sep 14, 2018)
  Artificial Intelligence (AI) has enormous potential to transform industries and enterprises. However, the AI ecosystem…
  119 reactions
- Harnessing AI for Social Good: Joining MIT's SystemsThatLearn@CSAIL Board (Sep 7, 2018)
  As the world begins to experiment and more rapidly implement AI across industries, many thoughtful researchers have…
  58 reactions · 4 comments
- Should businesses take a top-down or bottom-up approach to artificial intelligence (AI) implementation? (Dec 11, 2017)
  Both — senior leadership should drive a top-down approach while also enabling a bottom-up approach. Technologists’…
  58 reactions · 1 comment
- How machine learning projects go wrong (Nov 13, 2017)
  Machine learning (ML) and AI are still new. While the growth of open source tools and online courses have made them…
  231 reactions · 15 comments
- AI: Threat or Opportunity? (Oct 30, 2017)
  AI can be a bit scary and will be disruptive, but it can also create tremendous opportunities to provide value…
  73 reactions · 4 comments
Activity
- Cynch AI continues to be one of the most compelling businesses I’ve worked on, with one of the most compelling teams I’ve worked with. We’ve had a…
  Liked by Nigel Duffy
- Grateful for the Cynch AI team and the work that has brought us to this moment. Today’s announcement reflects their dedication to helping small…
  Liked by Nigel Duffy
Experience & Education
-
Cynch AI
*** *** *******
-
***** * *****
****** ** ****** ****** ** ********
-
******** ************ ***** *** ********* ** **********
***** ********** *******
-
********** ** *********** ***** ****
*** ******* ********
-
********** ******* ******
***** ***********
Publications
- DICR: AI Assisted, Adaptive Platform for Contract Review
  Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)
  An end-to-end, modular, and trainable system that automates the mundane aspects of document review and allows humans to perform the validation.
- Evolving Deep Neural Networks
  Artificial Intelligence in the Age of Neural Networks and Brain Computing
- Identification of New Inhibitors of Protein Kinase R Guided by Statistical Modeling
  Bioorganic & Medicinal Chemistry Letters
- New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron
  Proceedings of the Association for Computational Linguistics (ACL)
  Early work on structured prediction that became one of the dominant approaches to NER (prior to deep learning).
- Boosting Methods for Regression
  Machine Learning Journal
  Gradient Boosting applied to regression. One of the first papers extending the Gradient Boosting framework beyond classification.
- Convolution Kernels for Natural Language
  NeurIPS 2001
  Early work on structured prediction applied to natural language parsing.
- Support vector machine classification and validation of cancer tissue samples using microarray expression data
  Bioinformatics
  Early application of Support Vector Machines to gene expression array data.
- Leveraging for Regression
  Conference on Computational Learning Theory 2000 (COLT)
  Gradient Boosting extended to regression.
- Potential Boosters
  NeurIPS 1999
  One of the first papers on the Gradient Boosting framework. Examines requirements on the potential function to achieve Strong PAC Learning.
- A Geometric Approach to Leveraging Weak Learners
  European Conference on Computational Learning Theory 1999 (EuroCOLT)
  The first published paper to describe what has become known as Gradient Boosting. Reinterpreted AdaBoost as gradient descent over a space of margins, which allowed for generalizations along a variety of axes. (A minimal illustrative sketch follows this list.)
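To illustrate the gradient-boosting idea referenced in several of the entries above (each round fits a weak learner to the negative gradient of the loss on the current ensemble's predictions), here is a minimal regression sketch. The quadratic toy data, the depth-1 stump learner, the number of rounds, and the learning rate are illustrative assumptions, not a reproduction of any of the listed papers.

```python
import numpy as np

def fit_stump(x, residuals):
    """Fit a depth-1 regression stump: pick the split that best predicts the residuals."""
    best = None
    for threshold in np.unique(x):
        left, right = residuals[x <= threshold], residuals[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= threshold, left.mean(), right.mean())
        err = np.mean((residuals - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, threshold, left.mean(), right.mean())
    _, t, left_value, right_value = best
    return lambda z: np.where(z <= t, left_value, right_value)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Squared-error boosting: each round fits a stump to the residuals (the negative gradient)."""
    prediction = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        residuals = y - prediction          # negative gradient of 1/2 * (y - F(x))^2
        stump = fit_stump(x, residuals)
        prediction += lr * stump(x)
        stumps.append(stump)
    return lambda z: y.mean() + lr * sum(s(z) for s in stumps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 200)
    y = x ** 2 + rng.normal(0, 0.3, 200)    # noisy quadratic toy data
    model = gradient_boost(x, y)
    print("training MSE:", np.mean((model(x) - y) ** 2))
```

Because each stump only has to fit the current residuals, the same loop generalizes to other differentiable losses by replacing the residual computation with the corresponding negative gradient.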
Patents
- Methods for molecular property modeling using virtual data
  Filed US 20050278124A1
  Covers the ideas of what is now known as "Data Programming" in the context of predicting the properties of molecules.
- Modeling biological effects of molecules using molecular property models
  US 7856321
Recommendations received
7 people have recommended Nigel
Explore more posts
- Justin Gerrard (Various Startups • 20K followers)
  One of the biggest debates in AI is whether the value in the ecosystem accrues to the LLMs or the application layer. In this interview, Perplexity CEO Aravind Srinivas discusses why they focused on building an AI-powered application rather than developing their own foundational model. The logic: open-source progress and plummeting model costs are creating a massive opportunity for application-layer innovation. Srinivas predicts API costs will drop 10-100x in the next few years, making proprietary models less of a moat. Instead of competing in a capital-intensive arms race, Perplexity is doubling down on user experience, differentiation, and value creation at the application level. This increasingly feels like a smart bet, especially in the consumer space, because history shows that application companies often capture the most value in emerging tech markets. Some of the best examples of this are Google vs. early search engines, or Netflix vs. DVD rental stores. What do you think, will the biggest AI winners be application companies or model providers? Lmk below! 👇🏾 ------ 👋🏾 Want to stay on top of tech trends and news? Follow me here: Justin Gerrard ♻️ Repost if you think someone in your network would benefit! #perplexity
  478 reactions · 12 comments
- Jesse Johnson (Merelogic • 5K followers)
  Great blog post by Eric Ma about how biotech teams can better integrate public datasets into their decision making processes. I love how he includes a concrete, specific description of what makes the problem so hard, and also makes practical recommendations for what to do about it. https://lnkd.in/eEj98PNu
  71 reactions · 3 comments
- Andrew Ng (DeepLearning.AI • 2M followers)
  I’ve noticed that many GenAI application projects put in automated evaluations (evals) of the system’s output probably later — and rely on humans to manually examine and judge outputs longer — than they should. This is because building evals is viewed as a massive investment (say, creating 100 or 1,000 examples, and designing and validating metrics) and there’s never a convenient moment to put in that up-front cost.

  Instead, I encourage teams to think of building evals as an iterative process. It’s okay to start with a quick-and-dirty implementation (say, 5 examples with unoptimized metrics) and then iterate and improve over time. This allows you to gradually shift the burden of evaluations away from humans and toward automated evals.

  I wrote previously in The Batch about the importance and difficulty of creating evals. Say you’re building a customer-service chatbot that responds to users in free text. There’s no single right answer, so many teams end up having humans pore over dozens of example outputs with every update to judge if it improved the system. While techniques like LLM-as-judge are helpful, the details of getting this to work well (such as what prompt to use, what context to give the judge, and so on) are finicky to get right. All this contributes to the impression that building evals requires a large up-front investment, and thus on any given day, a team can make more progress by relying on human judges than figuring out how to build automated evals.

  I encourage you to approach building evals differently. It’s okay to build quick evals that are only partial, incomplete, and noisy measures of the system’s performance, and to iteratively improve them. They can be a complement to, rather than replacement for, manual evaluations. Over time, you can gradually tune the evaluation methodology to close the gap between the evals’ output and human judgments. For example:
  - It’s okay to start with very few examples in the eval set, say 5, and gradually add to them over time — or subtract them if you find that some examples are too easy or too hard, and not useful for distinguishing between the performance of different versions of your system.
  - It’s okay to start with evals that measure only a subset of the dimensions of performance you care about, or measure narrow cues that you believe are correlated with, but don’t fully capture, system performance. For example if, at a certain moment in the conversation, your customer-support agent is supposed to (i) call an API to issue a refund and (ii) generate an appropriate message to the user, you might start off measuring only whether or not it calls the API correctly and not worry about the message. Or if, at a certain moment, your chatbot should recommend a specific product, a basic eval could measure whether or not the chatbot mentions that product without worrying about what it says about it.

  [Truncated due to length limit. Full text: https://lnkd.in/gygj3y7w ]
  4,069 reactions · 179 comments
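To make the "quick-and-dirty eval" idea in the post above concrete, here is a minimal sketch of the kind of narrow, automated check it describes (e.g., does the chatbot mention the expected product or action?). The tiny example set, the placeholder call_chatbot function, and the substring-based metric are illustrative assumptions, not part of the original post.

```python
def call_chatbot(user_message: str) -> str:
    # Placeholder: replace with a call to your own chatbot / LLM pipeline.
    return "You're eligible for a refund; I've emailed you a prepaid return label."

# Start with ~5 cases; add or remove cases as you learn which ones actually
# distinguish good versions of the system from bad ones.
EVAL_CASES = [
    {"input": "My blender arrived broken, what should I do?", "must_mention": "refund"},
    {"input": "Which vacuum do you recommend for pet hair?", "must_mention": "TurboVac 300"},
    {"input": "Can I change my delivery address?", "must_mention": "address"},
]

def run_evals() -> float:
    """Return the fraction of cases whose output mentions the expected string."""
    passed = 0
    for case in EVAL_CASES:
        output = call_chatbot(case["input"])
        ok = case["must_mention"].lower() in output.lower()
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['input']!r}")
    score = passed / len(EVAL_CASES)
    print(f"score = {score:.2f}")
    return score

if __name__ == "__main__":
    run_evals()
```

Because the metric is deliberately narrow, the score is a noisy complement to human review rather than a replacement for it; the point is that it can be re-run automatically on every change and tightened over time.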
- Pawan Jindal (Prompt Opinion • 6K followers)
  MCP: 6 key questions everyone's asking.

  1. What does MCP really mean? Remember, LLMs are just next-word predictors based on the prompt. The more context you give them along with the prompt, the better they get at "predicting." MCP stands for Model Context Protocol: a protocol that standardizes how additional context is provided to LLMs through dynamic tools. The Model in MCP usually refers to a large language model (LLM), but technically it could be any AI model.

  2. Isn't that what RAG does? RAG (Retrieval-Augmented Generation) pulls data from vector stores based on user prompts and injects it into the prompt; think about searching large documents. MCP is primarily about calling tools, including APIs, often involving real-time computation, decisions, or even user actions. RAG = stuff the model might need. MCP = ask for precisely what it needs, when it needs it.

  3. Why is this suddenly a big deal? Until now, everyone has been inventing their own one-off approaches: OpenAI functions, LangChain tools, and custom wrappers. MCP provides a common "USB-C"-like standard for providing context. Developers don't need to reinvent discovery, permissions, or calling mechanisms for different models. It opens up a real ecosystem of interoperable tools and AI agents.

  4. Why a new protocol? What's wrong with REST APIs? REST is stateless: one call, one response. MCP is designed to support persistent sessions, bidirectional communication, and tool discovery via JSON-RPC over streaming transports like stdio or SSE. This enables a more streamlined experience. REST APIs have to be structured, whereas MCP lets the LLM "discover" tools based on just their descriptions. MCP servers are usually implemented on top of an existing API.

  5. OMG! The LLM calls my APIs? Is that secure? This is a common misconception. The app (aka host) connects to the LLM and registers available tools via an "MCP server." The LLM decides when to use a tool and sends a request back to the host. The host then calls the MCP server, which runs the tool or API. The result is sent back to the model as additional context, helping it generate a smarter response. This round trip happens between user interactions, in the background. The LLM never directly connects to the APIs or the tools. That said, it is best practice to make sure that the MCP server is never given more permissions than the LLM needs. The app and the MCP server still control how, when, and what to share as context with the LLM.

  6. What does this mean for healthcare? MCP combined with FHIR can fill many gaps in using AI, especially LLMs, in healthcare. We are still in the early stages of MCP evolution. Expect many new tools, especially around FHIR, to start coming up. We at Darena Solutions | MeldRx are also exploring how to leverage this in our FHIR platform, especially around CDS. Reach out to us if you are interested in learning more.
  53 reactions · 21 comments
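As a rough illustration of point 4 above, the sketch below shows the general shape of a JSON-RPC exchange in which a client lists a server's tools and then asks the host to invoke one. The method names and fields follow the MCP-style pattern the post describes, but the payloads are simplified assumptions (and the get_patient_allergies tool is hypothetical), not a copy of the official specification.

```python
import json

# The client asks the server which tools it exposes (tool discovery).
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool descriptions; the LLM only ever sees these
# descriptions and decides, in text, which tool it wants the host to call.
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_patient_allergies",
                "description": "Look up recorded allergies for a patient by FHIR Patient id.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"patient_id": {"type": "string"}},
                    "required": ["patient_id"],
                },
            }
        ]
    },
}

# When the model asks for the tool, the host (not the model) sends the call
# to the MCP server, which in turn hits the real API.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_patient_allergies", "arguments": {"patient_id": "example-123"}},
}

if __name__ == "__main__":
    for message in (list_tools_request, list_tools_response, call_tool_request):
        print(json.dumps(message, indent=2))
```

The indirection is the security point from question 5: the model only emits a request for a named tool, and the host decides whether and how to execute it.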
- Nagaraja Srivatsan (Endpoint Clinical • 9K followers)
  🧠💬 Small Language Models or Smart Prompting? Making the Right AI Choice

  As AI adoption deepens in clinical development, teams face a critical question: should we fine-tune a small language model (SLM), or should we just prompt a large language model (LLM) better? Here's a simplified take, drawing from Andrew Ng's latest newsletter and real-world practices.

  🔍 Prompting: underrated, underused, overpowered. Use it when you need quick iteration and low complexity, you don't have much training data, and you want to avoid engineering overhead. Pros: faster time to value, easier to test and deploy, and a good fit for most clinical trial applications (e.g., protocol summarization, site feasibility scoring). Example: a clinical study team uses GPT-4 to score inclusion/exclusion criteria matches with a mega-prompt, no fine-tuning needed.

  🛠️ Fine-tuning SLMs: powerful, but heavy. Use it when accuracy must exceed 95%, style or tone must be domain-specific (e.g., regulatory, patient-facing), or you're scaling up and want lower cost or latency. Pros: personalization (e.g., a model that writes like your medical affairs team), faster inference once deployed, and it works well with limited examples (<100!). Cons: harder to implement and maintain; data collection, version control, and retraining are needed. Example: fine-tune a 7B model on adjudication letters to draft clean, consistent summaries in pharmacovigilance workflows.

  🧪 In clinical development, ask these three questions before you fine-tune: Can you express the SOP in a prompt clearly? Is your LLM accurate enough with few-shot examples? Are you optimizing for speed, cost, or tone? If your answer is no across the board, then fine-tune away. But in 75% of cases, prompting or Retrieval-Augmented Generation (RAG) is more than enough (source: Andrew Ng).

  🔮 The future of AI in clinical workflows? Expect LLMs to assist protocol writing, eligibility determination, and patient query triage, all using smart prompting. But specialized agents, fine-tuned on in-house SOPs and communication styles, will emerge for high-accuracy tasks.

  📌 Takeaway: before you reach for fine-tuning, try prompt engineering harder. The simplest path is often the most maintainable one, especially when building responsible AI in regulated domains like life sciences. 💬 What's your take? Fine-tune or prompt smart? Drop your thoughts below ⬇️ #GenerativeAI #PromptEngineering #LLMs #FineTuning #ClinicalDevelopment #AIProductStrategy #AndrewNg #AgenticAI #AIInLifeSciences https://lnkd.in/gSJaSf5w
  42 reactions · 3 comments
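To illustrate the "mega-prompt" example mentioned above, here is a rough sketch of how a team might prompt a general-purpose LLM to score a patient note against inclusion/exclusion criteria. The criteria text, the score_eligibility helper, and the generic complete() function are hypothetical stand-ins for whatever trial and model API a team actually uses.

```python
import json

def complete(prompt: str) -> str:
    # Hypothetical stand-in for whichever LLM API the team uses.
    # It returns a canned answer here so the sketch runs end to end.
    return json.dumps({"criterion": "Age 18-75 at screening", "met": True,
                       "evidence": "62-year-old patient"})

# Illustrative criteria, not from any real protocol.
CRITERIA = [
    "Inclusion: Age 18-75 at screening",
    "Inclusion: Confirmed diagnosis of type 2 diabetes for at least 6 months",
    "Exclusion: Current use of insulin therapy",
]

PROMPT_TEMPLATE = """You are screening a patient note against one trial criterion.
Criterion: {criterion}
Patient note: {note}
Answer as JSON with keys "criterion", "met" (true/false/unknown), and "evidence"."""

def score_eligibility(note: str) -> list:
    """Score a patient note against each criterion with one LLM call per criterion."""
    results = []
    for criterion in CRITERIA:
        prompt = PROMPT_TEMPLATE.format(criterion=criterion, note=note)
        results.append(json.loads(complete(prompt)))
    return results

if __name__ == "__main__":
    note = "62-year-old patient, T2DM diagnosed 2019, managed with metformin only."
    print(json.dumps(score_eligibility(note), indent=2))
```

A structured-output prompt like this is easy to iterate on and to audit, which is part of why prompting is often the first thing to try before any fine-tuning in regulated settings.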
- Romy Alusi Hussain (Yale University • 3K followers)
  A fascinating - and brilliant - advancement in #AI. This is not just another large language model - DeepSeek is outperforming #GPT 4o and other models on reasoning and math tests with a fraction of the GPU overhead. Using traditional reinforcement learning, the team at DeepSeek AI has taken steps toward improved reasoning capabilities without any supervised data. They evolve through a purely reward-driven, self-referential process - similar to the way children learn to process the world around them - eventually encountering positive or negative stimuli and re-learning their prompts accordingly. This could usher in a new era of truly democratized AI-for-all, without the gatekeeping by private organizations. In many ways, this feels like the future OpenAI set about to create, finally coming to pass... https://lnkd.in/exdt4h8E
  38 reactions · 5 comments
- Ori Goshen (AI21 Labs • 8K followers)
  Current AI systems are fundamentally broken. AI builders are forced to choose between two flawed approaches: they can either “prompt and pray” or build static, hard-coded chains. But neither approach addresses the inherent unreliability of LLMs and LRMs, which are particularly problematic for mission-critical enterprise workflows. We believe that the next phase in AI requires a fundamental shift to AI Systems. To read more, check out the recent article I authored with my co-founder and co-CEO Yoav Shoham, and stay tuned for our upcoming announcement at HumanX on March 10th that details how AI21 Labs will be solving this challenge for the enterprise. https://lnkd.in/dh2_AGqZ
  150 reactions · 6 comments
- Alexis Aiello (Tempus AI • 3K followers)
  Thrilled to share some incredible news from the team at Tempus AI! We’ve just announced a $200M multi-year collaboration with AstraZeneca and Pathos to co-develop a multimodal foundation model in oncology that will help shape the future of cancer care. This partnership combines the power of Tempus' AI-enabled platform and real-world data with Pathos’ cutting-edge capabilities and AstraZeneca’s deep expertise in oncology. It's a huge moment, not just for Tempus, but for patients everywhere. https://lnkd.in/gSav4hRB
  104 reactions · 3 comments
- Peter Kraft (DBOS, Inc. • 6K followers)
  How can durable workflows help determine drug efficacy? Separating causation from correlation is a critical challenge in biomedical research. cStructure.io built a platform to solve this leveraging AI-generated causal graphs, but they ran into a problem: unreliable AI workflows.

  The technical challenge? Both causal graph generation and statistical analysis are complex, long-running workflows prone to multiple failure modes:
  - LLM calls can fail, hit rate limits, or return malformed outputs.
  - Statistical analysis involves orchestrating containers in customer environments with potential connection issues, data problems, and convergence failures.

  Initially, cStructure evaluated dedicated workflow orchestrators like Temporal and Dagster, but these required extensive rearchitecting of existing code. DBOS offered a different approach: they could integrate durable workflows directly into their existing pipelines without major structural changes. To scale causal graph generation and analysis, cStructure uses:
  - Durable workflows to ensure complex AI pipelines and statistical analyses can seamlessly recover from failures.
  - Enhanced observability, leveraging the graph state information automatically checkpointed by DBOS workflows to quickly identify the root cause of failures or misbehaviors.
  - Postgres to store both sensitive biomedical data and DBOS workflow state, consolidating their data in existing infrastructure rather than adding new systems to secure.

  The result: simpler error handling, accelerated developer velocity, and easier compliance.
  24 reactions · 1 comment
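As a generic illustration of the durable-workflow idea described above (and not DBOS's actual API), the sketch below checkpoints each completed step of a pipeline to a small state store, so a rerun after a crash or a failed LLM call resumes from the last finished step instead of starting over. The step functions and the JSON-file store are assumptions for the example; a production system would use a database such as Postgres.

```python
import json
import os

STATE_FILE = "workflow_state.json"  # stand-in for a durable store such as Postgres

def load_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def durable_step(name: str, fn, state: dict):
    """Run fn() once; if its result is already checkpointed, reuse it instead of re-running."""
    if name in state:
        print(f"skipping {name} (already completed)")
        return state[name]
    result = fn()
    state[name] = result              # checkpoint immediately after the step succeeds
    save_state(state)
    return result

# Hypothetical pipeline steps standing in for LLM calls and statistical analysis.
def generate_causal_graph():
    return {"edges": [["geneA", "outcome"]]}

def run_statistical_analysis():
    return {"p_value": 0.03}

def workflow():
    state = load_state()
    graph = durable_step("causal_graph", generate_causal_graph, state)
    stats = durable_step("analysis", run_statistical_analysis, state)
    return {"graph": graph, "stats": stats}

if __name__ == "__main__":
    print(workflow())  # rerunning after a failure resumes from the last checkpoint
```

The checkpointed step results double as an execution trace, which is the observability benefit the post mentions: you can see exactly which step a failed run last completed.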
- Anish Athalye (OpenAI • 6K followers)
  Is AI any good at evaluating AI? Is it turtles all the way down? We benchmarked evaluation models like LLM-as-a-judge, HHEM, and Prometheus across 6 RAG applications. Evaluation models work surprisingly well in practice. Hoping to see more of these real-time, reference-free evaluations to give end users more confidence in the outputs of AI applications.
  51 reactions · 5 comments
- Dr. Eva-Maria Hempe (NVIDIA • 10K followers)
  How well are LLMs doing in medical applications? Stanford's Human-Centered AI (HAI) institute benchmarked 6 models for some common medical use cases. While there is no clear winner across the board, bigger seems better, with GPT-4o having the best results across most applications while also coming in last on a few (like Documenting Diagnostic Reports or Recording Research Processes). Or as the authors put it: "Small models, while adequate for well-structured tasks, struggled with tasks requiring domain expertise, particularly in mental health counseling and medical knowledge assessment. Notably, open-ended text generation produced comparable BertScore-F1 ranges across model sizes." What's your take on these results?
  44 reactions · 6 comments