update local intro
souzatharsis committed Dec 21, 2024
1 parent 76ae572 commit 5733cb3
Showing 8 changed files with 18 additions and 18 deletions.
Binary file modified tamingllms/_build/.doctrees/environment.pickle
Binary file not shown.
Binary file modified tamingllms/_build/.doctrees/notebooks/local.doctree
Binary file not shown.
8 changes: 4 additions & 4 deletions tamingllms/_build/html/_sources/notebooks/local.ipynb
@@ -23,13 +23,13 @@
"\n",
"Running LLMs locally versus using cloud APIs offers several important advantages.\n",
"\n",
"Privacy-sensitive data processing is one of the primary reasons for running LLMs locally. Organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"Privacy concerns are a key driver for running LLMs locally. Individual users may want to process personal documents, photos, emails, and chat messages without sharing sensitive data with third parties. For enterprise use cases, organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"\n",
"Cost considerations become significant when operating at scale. Organizations running high-volume applications can face prohibitive API costs with cloud-based solutions. Development and testing environments that require frequent model interactions, educational institutions supporting multiple student projects, and research initiatives involving extensive model experimentation can all achieve substantial cost savings through local deployment.\n",
"Cost considerations are another key advantage of local deployment. Organizations can better control expenses by matching model capabilities to their specific needs rather than paying for potentially excessive cloud API features. For high-volume applications, this customization and control over costs becomes especially valuable compared to the often prohibitive per-request pricing of cloud solutions.\n",
"\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical.\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical. Further, the emergence of Small Language Models (SLMs) has made edge deployment increasingly viable, enabling sophisticated language capabilities on resource-constrained devices like smartphones, tablets and IoT sensors. \n",
"\n",
"Finally, local deployment enables deeper customization and fine-tuning capabilities. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation."
"Running locally also enables fine-grained optimization of resource usage and model characteristics based on target use case. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation. "
]
},
{
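The privacy and latency points in the revised introduction are easy to make concrete. Below is a minimal sketch of fully local inference, assuming the llama-cpp-python package is installed and a quantized GGUF model has already been downloaded; the model path and prompt are illustrative, not taken from the chapter:

```python
from llama_cpp import Llama

# Load a quantized model from local disk. No API key and no network
# call: the prompt below is processed entirely on this machine.
llm = Llama(
    model_path="./models/llama-3.2-1b-instruct-q4_k_m.gguf",  # illustrative path
    n_ctx=2048,     # context window size
    verbose=False,
)

response = llm(
    "Summarize the key obligations in this clause:\n"
    "<confidential text that never leaves the machine>",
    max_tokens=128,
    temperature=0.2,
)
print(response["choices"][0]["text"])
```

Because there is no network round trip, response time is bounded by local hardware rather than connectivity, which is the same property the latency paragraph relies on.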
8 changes: 4 additions & 4 deletions tamingllms/_build/html/notebooks/local.html
@@ -277,10 +277,10 @@ <h1><a class="toc-backref" href="#id171" role="doc-backlink"><span class="sectio
<section id="introduction">
<h2><a class="toc-backref" href="#id172" role="doc-backlink"><span class="section-number">8.1. </span>Introduction</a><a class="headerlink" href="#introduction" title="Permalink to this heading"></a></h2>
<p>Running LLMs locally versus using cloud APIs offers several important advantages.</p>
<p>Privacy-sensitive data processing is one of the primary reasons for running LLMs locally. Organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.</p>
<p>Cost considerations become significant when operating at scale. Organizations running high-volume applications can face prohibitive API costs with cloud-based solutions. Development and testing environments that require frequent model interactions, educational institutions supporting multiple student projects, and research initiatives involving extensive model experimentation can all achieve substantial cost savings through local deployment.</p>
<p>Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical.</p>
<p>Finally, local deployment enables deeper customization and fine-tuning capabilities. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation.</p>
<p>Privacy concerns are a key driver for running LLMs locally. Individual users may want to process personal documents, photos, emails, and chat messages without sharing sensitive data with third parties. For enterprise use cases, organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.</p>
<p>Cost considerations are another key advantage of local deployment. Organizations can better control expenses by matching model capabilities to their specific needs rather than paying for potentially excessive cloud API features. For high-volume applications, this customization and control over costs becomes especially valuable compared to the often prohibitive per-request pricing of cloud solutions.</p>
<p>Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical. Further, the emergence of Small Language Models (SLMs) has made edge deployment increasingly viable, enabling sophisticated language capabilities on resource-constrained devices like smartphones, tablets and IoT sensors.</p>
<p>Running locally also enables fine-grained optimization of resource usage and model characteristics based on target use case. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation.</p>
</section>
<section id="local-models-considerations">
<h2><a class="toc-backref" href="#id173" role="doc-backlink"><span class="section-number">8.2. </span>Local Models Considerations</a><a class="headerlink" href="#local-models-considerations" title="Permalink to this heading"></a></h2>
2 changes: 1 addition & 1 deletion tamingllms/_build/html/searchindex.js

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion tamingllms/_build/jupyter_execute/markdown/intro.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "299e545f",
"id": "88b3f8f7",
"metadata": {},
"source": [
"(intro)=\n",
8 changes: 4 additions & 4 deletions tamingllms/_build/jupyter_execute/notebooks/local.ipynb
@@ -23,13 +23,13 @@
"\n",
"Running LLMs locally versus using cloud APIs offers several important advantages.\n",
"\n",
"Privacy-sensitive data processing is one of the primary reasons for running LLMs locally. Organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"Privacy concerns are a key driver for running LLMs locally. Individual users may want to process personal documents, photos, emails, and chat messages without sharing sensitive data with third parties. For enterprise use cases, organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"\n",
"Cost considerations become significant when operating at scale. Organizations running high-volume applications can face prohibitive API costs with cloud-based solutions. Development and testing environments that require frequent model interactions, educational institutions supporting multiple student projects, and research initiatives involving extensive model experimentation can all achieve substantial cost savings through local deployment.\n",
"Cost considerations are another key advantage of local deployment. Organizations can better control expenses by matching model capabilities to their specific needs rather than paying for potentially excessive cloud API features. For high-volume applications, this customization and control over costs becomes especially valuable compared to the often prohibitive per-request pricing of cloud solutions.\n",
"\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical.\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical. Further, the emergence of Small Language Models (SLMs) has made edge deployment increasingly viable, enabling sophisticated language capabilities on resource-constrained devices like smartphones, tablets and IoT sensors. \n",
"\n",
"Finally, local deployment enables deeper customization and fine-tuning capabilities. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation."
"Running locally also enables fine-grained optimization of resource usage and model characteristics based on target use case. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation. "
]
},
{
8 changes: 4 additions & 4 deletions tamingllms/notebooks/local.ipynb
@@ -23,13 +23,13 @@
"\n",
"Running LLMs locally versus using cloud APIs offers several important advantages.\n",
"\n",
"Privacy-sensitive data processing is one of the primary reasons for running LLMs locally. Organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"Privacy concerns are a key driver for running LLMs locally. Individual users may want to process personal documents, photos, emails, and chat messages without sharing sensitive data with third parties. For enterprise use cases, organizations handling medical records must comply with HIPAA regulations that require data to remain on-premise. Similarly, businesses processing confidential documents and intellectual property, as well as organizations subject to GDPR and other privacy regulations, need to maintain strict control over their data processing pipeline.\n",
"\n",
"Cost considerations become significant when operating at scale. Organizations running high-volume applications can face prohibitive API costs with cloud-based solutions. Development and testing environments that require frequent model interactions, educational institutions supporting multiple student projects, and research initiatives involving extensive model experimentation can all achieve substantial cost savings through local deployment.\n",
"Cost considerations are another key advantage of local deployment. Organizations can better control expenses by matching model capabilities to their specific needs rather than paying for potentially excessive cloud API features. For high-volume applications, this customization and control over costs becomes especially valuable compared to the often prohibitive per-request pricing of cloud solutions.\n",
"\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical.\n",
"Applications with stringent latency requirements form another important category. Real-time systems where network delays would be unacceptable, edge computing scenarios demanding quick responses, and interactive applications requiring sub-second performance all benefit from local deployment. This extends to embedded systems in IoT devices where cloud connectivity might be unreliable or impractical. Further, the emergence of Small Language Models (SLMs) has made edge deployment increasingly viable, enabling sophisticated language capabilities on resource-constrained devices like smartphones, tablets and IoT sensors. \n",
"\n",
"Finally, local deployment enables deeper customization and fine-tuning capabilities. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation."
"Running locally also enables fine-grained optimization of resource usage and model characteristics based on target use case. Organizations can perform specialized domain adaptation through model modifications, experiment with different architectures and parameters, and integrate models with proprietary systems and workflows. This flexibility is particularly valuable for developing novel applications that require direct model access and manipulation. "
]
},
{
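The sentence added about Small Language Models also lends itself to a short illustration. Here is a sketch using the Hugging Face transformers pipeline on CPU; the model name is just one example of a sub-1B instruct model (an assumption for illustration, not a recommendation from the commit):

```python
from transformers import pipeline

# Models well under 1B parameters run acceptably on CPU-only hardware,
# which is what makes on-device and edge deployment viable.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example small instruct model
    device=-1,  # -1 = CPU; no GPU or cloud endpoint required
)

out = generator(
    "Rewrite this sensor alert as a one-line status message: "
    "temperature 82C, threshold 75C.",
    max_new_tokens=40,
)
print(out[0]["generated_text"])
```

On a resource-constrained device the same pattern applies; only the runtime changes (for example, an on-device inference engine instead of transformers).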
