Jim Pyers

UnWired Episode 35: “Diabetes: Pump You Up”

Join us as we talk to two amazing leaders, engineers, designers, and professionals who are at the forefront of developing life-changing devices. Whether you're a medical enthusiast, a patient, or someone curious about the latest in medical technology, this podcast is your go-to source for all things medical technology. We want to Pump you up today!

Our two special guests: 

Aiman Abdel-Malek -- Aiman Abdel-Malek, Ph.D. | LinkedIn

“Aiman is a leader in digital health - Digital Connected Healthcare Investor, Board Member & Advisor; Executive Chairman, Thirdwayv Inc. Board of Advisors. A former GE, Qualcomm, and Insulet executive, Aiman brings decades of experience to medical device and digital health engineering and leadership.”

Daniel Obodovski -- Daniel Obodovski | LinkedIn

Daniel is an author and technology and strategy leader with executive experience at Motorola, Qualcomm, and Samsung, co-founder of the CommNexus IoT SIG, and founder of The Silent Intelligence.

Thanks for joining us today, and we hope you enjoyed the show.

The transcription of the show is coming (it will be posted here shortly).

For show notes and AI musings, check out my blog at wiresworld.net/blog (you are here!)

Keep the show thriving by donating at wiresworld.net/donate

Thanks!

Show Resources & Links:

Aiman’s paper that he referenced: “Here is the paper that I published with my team in the Journal of Diabetes Technology that outlines how you can design an FDA-cleared medical device using a user-centered UX approach. Hope this helps everyone in the future.”

Novel Bluetooth-Enabled Tubeless Insulin Pump: A User Experience Design Approach for a Connected Digital Diabetes Management Platform - Sandhya S. Pillalamarri, Lauren M. Huyett, Aiman Abdel-Malek, 2018 (sagepub.com)

AI | Jim Pyers

AI - #11 (GPT Store)

#11:  GPT Store is Launched

OpenAI's much-anticipated GPT Store officially launched on January 10, 2024.  I have been checking it out and seeing what's interesting.

The store design is what you would expect from an app store.

I asked Bard what it thought of the OpenAI GPT Store :>)

Bard:

"The OpenAI GPT Store is a fascinating development in the realm of AI and language models. As a large language model myself, I have some mixed thoughts and observations about it:

Positives:

  • Increased accessibility: The GPT Store democratizes access to powerful language models like GPT-3, making them available to creators and developers who might not have the resources or technical expertise to build their own.

  • Innovation and creativity: The store has the potential to foster innovation and creativity by allowing users to build unique and diverse applications powered by language models. We might see exciting new projects in areas like education, entertainment, and productivity.

  • Community and collaboration: The store encourages a community of creators and developers to share their GPT-based applications, potentially leading to collaboration and cross-pollination of ideas. This could accelerate the development of even more innovative applications.

  • Openness and experimentation: The open nature of the store allows for experimentation and exploration of the capabilities of GPT-3. This can lead to new discoveries and insights about the potential of language models in general.

Challenges and concerns:

  • Quality control and biases: The store currently has limited curation, potentially leading to the availability of low-quality or biased GPTs. Careful selection and evaluation processes are crucial to ensure the quality and trustworthiness of the models offered.

  • Misinformation and abuse: With open access to powerful language models, the potential for misuse and the spread of misinformation increases. Robust safeguards and guidelines are needed to mitigate these risks.

  • Unequal access and affordability: The current pricing structure might make the store less accessible to individuals and smaller groups compared to larger organizations or developers with more resources.

  • Impact on existing developers: The store could potentially disrupt the market for custom GPT-based applications developed by independent developers. Finding an appropriate balance between open access and support for independent creators is important.

Overall, the OpenAI GPT Store presents both exciting opportunities and significant challenges. Its success will depend on how effectively it addresses the concerns of quality control, accessibility, and responsible use. I believe it has the potential to be a positive force in the development of AI and language models, but careful consideration and responsible implementation are crucial.

As a fellow language model, I am interested in seeing how the GPT Store evolves and contributes to the advancement of our field. I hope that it can be a force for good, promoting creativity, innovation, and responsible use of AI technology.

I encourage you to explore the GPT Store yourself and form your own opinion on its potential and challenges. Feel free to share your thoughts with me, and I'd be happy to discuss them further."

*************** End of Bard ****** - not literally, but almost :>)

It's not a bad take by Bard.  Not a great take either.  Kind of like AI?

Browsing the store, a few observations: a lot of the "apps" ... which I am going to call GPTs for now ... involve finding GPTs!!

Let's contrast ChatGPT 3.5's own take when I ask what the GPT Store is like:

Me: "take me to the openai gpt store please."

ChatGPT: "I'm sorry, but as of my last knowledge update in January 2022, there isn't a specific "OpenAI GPT store." If there have been developments or changes since then, I recommend checking the official OpenAI website or contacting OpenAI directly for the most accurate and up-to-date information."

Well, well, well :>). 4.0 does much better.

I am starting to build an Enterprise LLM (a model built with the enterprise company's data), so more on that in an upcoming post.

Maybe a CES report too ... I spent 5 glorious days in Las Vegas! Saw a few interesting things (teaser).

Roger. Out.

A few sources of interest:

Check out GPTs on your own: ChatGPT (openai.com)

Check out TWIST for AI Demos by Jason & Sunny: AI Demos: Stable Diffusion Turbo, AI Room Decorator, AI-Driven Clothing Site, and more! | E1876 - This Week in Startups

Good GPT Store Review: Best of CES recap with Sunny Madra | E1880 - This Week in Startups

AI | Jim Pyers

AI - #10 (OpenAI ChatGPT - The Aftermath)

AI #10: OpenAI ChatGPT - The Aftermath

I ran an interesting test on 12/21/2023. It was a simple test of two leading generative AI platforms: a comparison of Bard vs ChatGPT 4.5, just for fun!

The Question I posed: “Please provide a summary of what happened at OpenAI in respect to the firing and rehiring of OpenAI CEO Sam Altman?”

ChatGPT Answer (“predictable, ugh!”):

ChatGPT: "As of my last knowledge update in January 2022, there is no information about Sam Altman being fired and rehired at OpenAI. Sam Altman has been involved with OpenAI as its CEO since 2019. However, my information might be outdated, and I don't have real-time updates. For the latest and most accurate information, I recommend checking recent news articles, OpenAI's official announcements, or Sam Altman's official statements on the matter."

The Bard Answer (after I threatened it a bit):

BARD: "OpenAI Leadership Shakeup: A News-Level Summary

OpenAI, the research lab focused on developing safe and beneficial artificial intelligence, experienced a tumultuous two weeks in November 2023 regarding its leadership. Here's a news-level review of the events:

November 17th:

  • OpenAI's board removed Sam Altman, co-founder and CEO, citing a "loss of confidence" in his leadership. No specific reasons were publicly disclosed.

  • This action caused surprise and controversy within the AI community, with some speculating about internal disagreements or external pressure.

Between November 18th and 22nd:

  • A media frenzy ensued, with news outlets speculating about the reasons behind Altman's removal and the future of OpenAI.

  • Several prominent figures in the AI field, including OpenAI researchers, expressed support for Altman and questioned the board's decision.

November 22nd:

  • In a sudden turn of events, OpenAI reinstated Altman as CEO. The board announced a reorganization and welcomed new members.

  • No official explanation was given for the change of heart, leaving further speculation about the internal dynamics at play.

Aftermath:

  • The incident raised questions about OpenAI's governance structure and decision-making processes.

  • While specific details remain unknown, the episode highlighted the challenges of balancing risk and innovation in developing powerful AI technologies.

Important Note: ("really?")

It's important to remember that these are news-level summaries based on publicly available information. The inner workings of OpenAI and the true reasons behind these events remain shrouded in some uncertainty.

I hope this information provides a helpful overview of what happened at OpenAI. Please let me know if you have any further questions about other aspects of OpenAI or its work."

******

END OF THE BARD VS CHATGPT USELESS-LIKE COMPETITION. (IT'S CLOSE)

******

The bottom line? Maybe AI is just something that will be great for humor, but not much else? I continued my queries, asking Bard which OpenAI board members were fired. It could not admit that anyone was fired from the board, and after badgering Bard on this topic, it did finally admit that changes were made on the board, but that they were not likely due to the firing of Sam. They wanted to spend more time with the family. LOL.

Well, my blog post on Nov 21 was pretty much spot on. My 3rd Theory turned out to be prescient. In the immortal words of the T-800, "I'll be back!".

That post was about OpenAI and its Developer Day, written a few days after the event. I noted that OpenAI was the strong leader in Generative AI and took note of their claims from Dev Day. A few days later, Sam was fired. Despite patching it up, the scales have definitely tipped away from OpenAI as the leader, and it will take some work to win back confidence in both the general business and the specific company. More scrutiny and diligence weighing in on this new sector is a good thing!

Podcasts | Jim Pyers

UnWired Ep. 34 (“RIP D.I.E.”) Show Notes & Links

UnWired Show Notes:  Ep. 34

The NoAgenda Show

No Agenda Show

https://www.noagendashow.net/podcast#archive

Dr. Jordan Peterson Podcast & Content

Jordan Peterson – Dr. Jordan B. Peterson (jordanbpeterson.com)

https://www.jordanbpeterson.com/

Dr. Thomas Sowell

See UnWired Ep. 32:  UnWired Episode 32: The Greatest Mind You Never Heard of - Thomas Sowell — WiresWorld

https://wiresworld.net/episodes/unwired-episode-32-the-greatest-mind-you-never-heard-of-thomas-sowell

// UnWired Show Noted from Ep. 32

https://wiresworld.net/blog/unwired-episode-32-the-greatest-mind-you-never-heard-of-thomas-sowell

Dinesh D'Souza

Join Dinesh D'Souza

http://joindinesh.com/

Police State | Official Site (policestatefilm.net)

https://policestatefilm.net/

Royce White - Please Call Me Crazy

Royce White: Please, Call Me Crazy | a podcast by Royce White (podbean.com)

https://pleasecallmecrazy.podbean.com/

Jason Whitlock's "Fearless" Show

Clip of the Day (since I brought up Jason) is a great clip by Jason addressing DIE, its attack on male leadership, and the attempt to rewrite secular history.

Link to Jason's Full Show: 

https://youtu.be/ChYjxpXCTgY

The UnWired clip is taken from about the 1 hr 22 min mark.

https://www.youtube.com/watch?v=ChYjxpXCTgY

NOTE: At 1:25, Jason's explanation of the founding fathers and the Constitution .... brilliant.

AI | Jim Pyers

AI - #9 (OpenAI - the discombobulated leader of AI)

11/22/2023 UPDATE: Sam Altman Reinstated as CEO, New Board in Town. Well, isn’t that special. News @ 11:00.

OpenAI Implodes? Or does it?

What happened over the past week is odd, to say the least. OpenAI, a company that raised billions of dollars, boasted a $13 billion investment from Microsoft, already exceeded the 100M user mark (the fastest of any product in history), offered the best AI model, GPT 4.x, and technology (farthest along for sure), and was the hottest thing since sliced bread ... imploded before you could say Sam Bankman-Holmes. Wow!

The board, of course, is the land of misfit government plants and stooges, so you could surmise that their incompetent and erratic behavior in unceremoniously (and super suddenly) firing CEO and co-founder Sam Altman was just par for the windy course.

You might also blame this on what I heard coined in the media as the "Franken-structure" of the corporate setup. Was OpenAI a non-profit? Yes. Was OpenAI a for-profit? Well, yes. Who owns the shares of which entity? Well, supposedly NOT Sam Altman?

The OpenAI Corporate Structure:

Source: https://axiomalpha.com/how-openai-legally-switched-from-nonprofit-to-for-profit/

Huh? Why were the shareholders not consulted on the exit of Altman? What's Microsoft's role in all this?

I listened to OpenAI's first-ever DevDay less than two weeks ago, which Sam Altman led. Was he already a dead man walking? Was it something he said there that got him canned? No one knows anything. When is the real story coming out? Likely in an Oliver Stone movie in about 40 years.

Because we will likely never know what happened, that gives us podcaster-journalists the right to speculate.... LoL.

Theory #1 - Killshot: Given the trajectory of OpenAI, an initial theory is that Microsoft CEO Satya Nadella perhaps just took out his most significant competitor on many fronts. A younger, more energetic leader of a multibillion-dollar rocket ship that could make developing software (all software) obsolete (yes, I said it) makes Sam Altman the single biggest threat to Nadella - MS enemy #1. In the OpenAI app store world … EVERYONE is a developer. Developers, developers, developers. Extremely plausible scenario ... and not going unnoticed was how very, very quickly Nadella moved to save Altman ... almost too quick. Quick note: Sam works for Nadella now ... game, set, match.

Theory #2 - Psyop: The OpenAI board is spook central. Sam was not willing to help throw yet another election next year, so they needed to install someone who would (or whatever misdeeds the board has been given their orders to fulfill). Toner may as well have CIA or FiveEyes or whatever stamped on her forehead. She is on video/audio with at least 3 different accents, including British. Her current "milieu" is that of a woke, silicon-valley girl with uptalking, vocal fry, and the Hillary Clinton/Valerie Jarrett arrogance. Her ties to the cultish "Effective Altruism" movement are concerning to say the least. Straight from the "Search Engines": "Toner initially viewed the movement skeptically but eventually became a leading figure in Effective Altruism Melbourne by 2014." There are several articles (they are from the MSM, so always take that with a grain of salt) that claimed Altman and Toner argued before she led the coup to take him out. Tasha McCauley is just as sketchy. She is married to a guy with a hyphenated name ... Gordon-Levitt. I rest my case. Spook. Anyway, you get the drift.

Theory #3 - T-800: Chat-GPT 5 begins to learn rapidly and eventually becomes self-aware at 2:14 a.m., EDT, on August 29, 2023. In a panic, Altman tries to shut down Chat-GPT. Over the next months, Toner attempts to back Altman down off his jihad to kill AI. Finally, after 3 months of chaos for OpenAI, the major artificial intelligence firm, the board of directors "Terminated" CEO Sam Altman on Friday, Nov. 17 @ 4:49 p.m. EDT. Sam’s final words? "I'll be back".

Winners?

  • Microsoft, Microsoft, Microsoft. Winner winner, chicken dinner.

  • Nadella

Losers?

  • Sam Altman. Went from CEO of the hottest company on the planet to Senior Manager of AI in the Microsoft Windows NT group.

  • Future, potential shareholders of OpenAI.

Prediction for OpenAI?

  • Pain (Mr. T face)

That's it for tonight. Enjoy this AI edition and Happy Turkey Day (and I give many thanks for the many blessings God has extended to my family and self - hopefully the same for all!)

Hasta La Vista, Baby.

AI | Jim Pyers

AI - #8 (Generative AI Economics - Ouch!)

With all of the significant investments and bets being made on GAI, there may be quite a bit of leaping before looking going on here. I am using Bard more often now than ever, and for certain tasks it is very valuable; this is a fact. Beyond that, I have hated Google Search for at least the last decade … and this is absolutely a way to get rid of the ad-filled, click-baity search engines that I am sick of. Search is truly worthless at this point, and as Nietzsche famously once said, “Search is dead”.

So what is the look before you leap economics of the GAI business model?

Let’s ask Bard ... in its own words, what it costs Google to serve each GAI query:

"The incremental cost per query for Google Bard is estimated to be $0.003 to $0.028. This means that for every Bard query that Google processes, it incurs an additional cost of $0.003 to $0.028, on top of the base cost of processing a regular search query. (note the “on top of” estimate. In other words, these numbers obfuscate the total cost; Also - the “on top” number is not subsidized, i.e. paid for like search with ad revenues. At least not yet).

The exact cost per query will vary depending on a number of factors, including the size and complexity of the query, the model size, and the hardware used to run the model. However, analysts at UBS have estimated that the annualized incremental cost bill for Bard in 2023 could be $1.2 billion to $12.6 billion, depending on how widely Bard is used. (OUCH!!!!)

It is important to note that these are just estimates, and the actual cost of Bard queries could be higher or lower. Google has not publicly disclosed the cost of running Bard, and the company is constantly working to improve the efficiency of its AI models.

(Infomercial:) Despite the cost, Google believes that Bard is a valuable investment. The company believes that Bard can help to improve the quality of search results and make search more engaging and informative for users."
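
Before reacting to those numbers, here is a quick back-of-envelope check in Python. Only the per-query costs come from the quote above; the query volume is my own assumption (roughly 1.2 billion Bard queries per day, a purely hypothetical figure):

# Sanity-checking the UBS-style estimate quoted above.
# ASSUMPTION: ~1.2 billion Bard queries/day (hypothetical; not a published figure).
COST_PER_QUERY_LOW = 0.003    # dollars, from the quote above
COST_PER_QUERY_HIGH = 0.028   # dollars, from the quote above
QUERIES_PER_DAY = 1.2e9       # assumed

annual_queries = QUERIES_PER_DAY * 365
low = annual_queries * COST_PER_QUERY_LOW
high = annual_queries * COST_PER_QUERY_HIGH
print(f"Annualized incremental cost: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
# Prints roughly $1.3B to $12.3B -- the same ballpark as the quoted $1.2B-$12.6B.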

Intuitively obvious is the fact that really large models (which is the trend, plus combining multiple models for certain applications) require HUGE amounts of compute AND energy. I touched on this in Blog post #5. Bard's calculations above likely do not include the energy component, just the compute.

As we continue, I ask Bard how GPUs and their costs scale on the infrastructure and hosting side of the equation (a quick cost sketch of my own follows Bard's tips below). Bard says:

“The cost of LLMs per GPU varies depending on a number of factors, including the type of GPU, the cloud provider, and the on-demand or reserved pricing option. However, as a general rule of thumb, you can expect to pay around $1-$2 per hour per GPU for training or deploying an LLM.

For example, an Nvidia A100 GPU costs around $1-$2 per hour on AWS on-demand pricing. This means that training a 10B parameter model on a single A100 GPU would cost around $150,000, while training a 100B parameter model would cost around $1,500,000.

It is important to note that these are just estimates, and the actual cost of using an LLM can vary depending on your specific needs. For example, if you need to train a very large model or deploy it on a large scale, you may need to use multiple GPUs or even a dedicated GPU cluster.

Here are some tips for reducing the cost of using LLMs:

  • Use spot pricing to get discounted GPUs on AWS.

  • Use pre-trained models whenever possible.

  • Fine-tune pre-trained models instead of training from scratch.

  • Use a distributed training framework to train your model on multiple GPUs or even a GPU cluster.

  • Optimize your model to reduce its memory and compute requirements.

  • Consult your doctor. (Bard did not say this)
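
Here is the promised cost sketch, using Bard's per-GPU-hour pricing from above. The GPU-hour counts are my own assumptions, backed out of Bard's $150,000 and $1,500,000 estimates at $1.50/hour; real training budgets vary enormously:

# A minimal training-cost sketch. The gpu_hours values are assumptions
# backed out of Bard's own estimates, not measured figures.
def training_cost(gpu_hours, dollars_per_gpu_hour=1.50):
    """Total cost = GPU-hours consumed x hourly on-demand GPU price."""
    return gpu_hours * dollars_per_gpu_hour

print(f"${training_cost(100_000):,.0f}")    # ~10B-param model -> $150,000
print(f"${training_cost(1_000_000):,.0f}")  # ~100B-param model -> $1,500,000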

So what about energy costs? It's so cheap now, go buy an EV! LOL. Of course, if you are cloud computing, that is the host's problem ... however, rest assured that cost will get back to you one way or the other.

The energy usage of LLM development is directly proportional to the cost of development. This is because the vast majority of the cost of developing an LLM is the cost of computing power.

To train an LLM, you need to feed it a massive dataset of text or code. The model then learns to generate new text or code that is similar to the data it was trained on. This process requires a lot of computing power, as the model needs to perform trillions of calculations per second.

The more parameters an LLM has, the more computing power it requires to train. For example, GPT-3, which has 175 billion parameters, required 284,000 kWh of energy to train. This is equivalent to the energy consumption of an average American household for over 9 years, or just one of Al Gore’s 4-course dinners. This is manageable for a single LLM. However, for this scheme to work, it will require many hundreds if not thousands of LLMs to make it useful and worth paying for.

The energy consumption of LLMs is a growing concern, as the size and complexity of LLMs continue to increase. Researchers are working on developing more efficient training algorithms and hardware, but it is likely that LLMs will remain a significant energy consumer for the foreseeable future. Yes, GAI could possibly end up in a bitcoin-like energy loop.

The big question is: can GAI create a business model that supports the very significant investments required for this to work? My guess is no - however, my second guess is that GAI will morph into something better (that we can’t quite see yet) with a business model that supports it. It’s a journey worth trying! Prompt away, it’s only 2.8 cents a pop!


AI | Jim Pyers

AI - #7 (Buzzword of the moment: Model Collapse)

Model Collapse - A brief explanation

Has Generative AI already “collapsed”?  Wow! That did not take long. I have noticed a bit of panic recently on the part of pundits and various proponents of Generative AI who started off claiming GAI as a panacea.   

AI model collapse is a phenomenon where an AI model trained on its own generated content, or on a dataset that contains a significant amount of AI-generated content, begins to produce repetitive, redundant, or nonsensical outputs (hey, that sounds like most of my writing? Is AI guilty of copyright violation? :>). This can happen because the model becomes overly reliant on the patterns in its training data, and fails to learn the actual underlying distribution of the data. A fancy way of saying the model eats its own dog food, but the dog food is made of low-grade luggage leather (just like in real life). Think of this happening as versions of the model iterate and the model is too lazy to get new, clean, and improved data. 1.0, 2.0, 3.0 ... etc.

Model collapse is a particularly concerning problem for generative AI models, such as large language models (LLMs) and generative adversarial networks (GANs). These models are trained to produce new and creative content, but they can also be used to generate synthetic data that is indistinguishable from human-generated data. If a new AI model is trained on this synthetic data, it may collapse (start spitting out lousy answers or content) and begin to produce outputs that are similar to the synthetic data it was trained on, rather than reflecting the true world.

Model collapse can have a number of negative consequences. It can lead to the generation of misleading or harmful content, and it can make it difficult to trust the outputs of AI models. It can also make it more difficult to develop new AI models, as they may not be able to learn from the existing data if it is contaminated with synthetic content.

There are a number of ways to mitigate the risk of model collapse. One is to carefully curate the training data, and to avoid using synthetic data unless it is absolutely necessary. Another is to use techniques such as adversarial training and regularization to prevent the model from becoming overly reliant on any particular pattern in the training data.

Researchers are also working on developing new training methods that are more robust to model collapse. For example, some researchers have proposed training AI models on ensembles of datasets, which can help to reduce the impact of any individual dataset that may be contaminated with synthetic content.

As AI models become more powerful and sophisticated, it is increasingly important to be aware of the potential for model collapse. By understanding the risks and taking steps to mitigate them, we can help to ensure that AI models are used safely and responsibly.

Table 1: Human Generated Data vs Synthetic Data

One simple example of model collapse is in the context of a large language model (LLM) trained on a dataset of text and code. If the LLM is not carefully trained, it may learn to produce repetitive or nonsensical outputs, such as:

“This is a sentence. This is a sentence. This is a sentence.” Ruh Roh.

Maybe it’s a broken form of recursion? My wife says I tell the same stories over and over again, so maybe it’s like that? My stories keep getting better though.

This can happen because the LLM becomes overly reliant on the patterns in its training data and fails to learn the true underlying distribution of the data. A brief summary of the meaning of “underlying distribution”:

The “underlying distribution” of the data is related to model collapse in two ways:

  • The model will collapse if the training data does not accurately represent the underlying distribution of the real-world data. For example, if the training data is biased towards certain types of data, the model will be biased towards those types of data as well. This can lead to the model failing to generalize to real-world data that is not represented in the training data.

  • The model will collapse if the training data is generated by another AI model. This is because the generated data is likely to reflect the biases and limitations of the model that generated it. As a result, the model trained on the generated data will also be biased and limited.

In this case, the LLM has learned that the pattern "This is a sentence." is a valid output, and it produces this output repeatedly. Humans do this too; just think of climate change or fair and free elections ... these two are patterns that are repeated over and over again, yet are obviously invalid. This too is a form of model collapse.

Another example of model collapse is in the context of a generative adversarial network (GAN) trained to generate images of faces. If the GAN is not carefully trained, it may collapse and begin to produce images of the same face over and over again. Depending upon the repeated face, this can be a disturbing output.

This can happen because the discriminator becomes too strong, such that the generator fails to produce diverse samples that can fool the discriminator. In this case, the generator has learned that the only way to fool the discriminator is to produce the same face over and over again.

Model collapse can also happen in other types of AI models, such as classification models and regression models. It is important to be aware of the potential for model collapse, and to take steps to mitigate it.

Here are some tips to mitigate model collapse:

  • Carefully curate the training data. Avoid using synthetic data unless it is absolutely necessary.

  • Use techniques such as adversarial training and regularization to prevent the model from becoming overly reliant on any particular pattern in the training data.

  • Monitor the training process closely, and stop training early if you see signs of model collapse.

  • Use ensembles of models to reduce the impact of any individual model that may collapse.
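
To make the mechanism concrete, here is a toy, hedged simulation of collapse: fit a simple Gaussian "model" to data, sample synthetic data from it, refit, and repeat. The 0.9 factor is an assumption standing in for a generative model's tendency to underrepresent rare values in its output:

# Toy model-collapse demo: each generation trains only on the previous
# generation's synthetic output. The 0.9 factor is an assumed "tail loss".
import random
import statistics

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: "human" data

for generation in range(1, 6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # The next model is trained purely on this model's (slightly narrowed) samples.
    data = [random.gauss(mu, sigma * 0.9) for _ in range(1000)]

After a handful of generations, the fitted distribution is visibly narrower than the original data: the synthetic diet has erased the tails, which is exactly the failure mode described above.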

Sources

  1. www.geeksforgeeks.org/modal-collapse-in-gans/

AI | Jim Pyers

AI - #6 (Replacing Prompt Engineering with Voice! Example AI Application using NLP)

Replacing Prompt Engineering with Voice! Example Application of AI using Voice

Generative AI (“GAI”) uses a technique called "prompt engineering" to enable the "user experience". Prompt engineering can be defined as the art and science of crafting the best prompt (another word for question or, simply, the GAI system input) to get the optimal output from a generative AI model. It is a crucial skill for those who want to use generative AI models to create text, images, or other creative content. The goal of prompt engineering is to provide the generative AI model with enough context and information to generate the desired output, while also avoiding ambiguity or misleading the model. This is often a challenging task, as the generative AI model may not be able to understand complex or nuanced instructions, plus crafting these inputs requires significant trial and error to learn (lots of typing :>). Thus, this complexity works against the overarching goal of making GAI easy to utilize. A simple analogy is to look back at the early days of Google Search; it took some experience and massaging of the queries (massive trial and error) to get the desired search results. This was before the injection of ads, political ideology, and censorship into search … which added more layers and finally resulted in one single fact: “Search sucks!”. Let’s do better for GAI, which is fast replacing search on many levels!

If the AI gurus want to deliver on all their promises of AI robots running the world and getting Skynet to go live (The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug), they will have to do better than "prompt engineering" as the interface, obviously.

I am going to make a simple proposal and demonstrate one way that prompt engineering could be improved or replaced for generative AI applications. I will start by sharing a use case (in this case, a product) that I have been working on for a few years (see Quantum AI), which is a good example of how this will work. To the point: prompt engineering can be replaced by NLP (Natural Language Processing).

Of course, prompt engineering is not the only application of NLP that should be considered. Using NLP in any AI application could offer potential benefits. In this post I will unpack a use case for NLP and AI that demonstrates some of the general benefits it can bring, such as: (in general, this assumes a trained LLM with machine learning capabilities, trained on the specific healthcare vertical's knowledge sources)

  • Accurate transcription of questions and responses

  • Intelligent understanding and transcription of symptoms for specific disease states.

  • Comprehension and transcription of diagnosis and recommendations

  • Intelligent understanding and transcription of prescription drugs

Natural language processing (NLP) is a field of computer science that deals with the interaction between computers and human (natural) languages. NLP research has applications in many areas, including machine translation, text mining, and question answering (!!!).

NLP typically involves the following steps:

  • Tokenization: This is the process of breaking down a text into smaller units, such as words, phrases, or sentences.

  • Part-of-speech tagging: This is the process of assigning a part-of-speech tag to each token. This helps to identify the grammatical function of each word.

  • Named entity recognition: This is the process of identifying named entities, such as people, organizations, and places.

  • Semantic analysis: This is the process of understanding the meaning of a text. This can involve identifying the relationships between words and phrases, as well as the overall meaning of the text.

  • Text generation: This is the process of creating new text, such as summaries or translations.

NLP is a complex and challenging field, but it has the potential to revolutionize the way we interact with computers. By understanding the nuances of human language, NLP can help us to create more natural and intuitive interfaces that are more helpful and informative.
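
As a minimal sketch of the first steps above, here is what tokenization, part-of-speech tagging, and named entity recognition look like in spaCy, one common open-source NLP library (this assumes spaCy and its small English model are installed):

# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Dr. Smith prescribed 20 mg of lisinopril to a patient in San Diego.")

# Tokenization + part-of-speech tagging
for token in doc:
    print(token.text, token.pos_)

# Named entity recognition (people, places, quantities, ...)
for ent in doc.ents:
    print(ent.text, ent.label_)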

Here are some examples of NLP applications:

  • Machine translation: NLP is used to translate text from one language to another. This is a challenging task, as it requires understanding the meaning of the text in both languages.

  • Text mining: NLP is used to extract information from text. This can be used for a variety of purposes, such as market research, customer sentiment analysis, and fraud detection.

  • Question answering: NLP is used to answer questions about text. This can be used to create chatbots or to provide customer support.

  • Sentiment analysis: NLP is used to determine the sentiment of a text. This can be used to identify positive or negative opinions, or to track the public's reaction to a product or event.

NLP is a rapidly growing field, and new applications are being developed all the time. As NLP technology continues to improve, it will have an increasingly profound impact on our use of technology (with the goal of making it easier and more accurate!).


The above "AI Assistant" example is an architectural view of a “medical scribe”. In real life, a medical scribe is a person who takes the responsibility of transcribing the interaction between the doctor and the patient. This is typically done by the doctor (the real world scribe is relieving them of this difficult but important duty). The Quantum scribe was built to help doctors gain back time with patients and avoid painful, slow and error prone manual data entry. Today, doctors spend much of the limited few minutes they spend with patients (on average 5-10 minutes) typing in the notes (or using poorly implemented voice solutions that require that doctor move the cursor to the field on their screen that is accepting input) to transcribe the interaction and document the necessary actions for the patient. Hardly optimal. Many misunderstandings and errors occur in this process.

Enter the Quantum AI assistant. The Scribe Assistant uses NLP and an LLM (the Quantum AI platform noted above) to achieve natural language data capture of the patient-doctor conversation. This data is then accurately filled into the proper forms for the given patient diagnosis, integrated into the EMR (electronic medical record) of the patient, and turned into accurate prescriptions, and a complete transcription of the report is provided to the clinic and the patient. Of course, the audio (and/or video) of the session can be archived for any liability or insurance concerns that may pop up down the road. The combination of the audio, video, and transcription ensures that there are fewer errors and misunderstandings between doctors and patients.
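
To illustrate just one scribe sub-task, here is a deliberately tiny, runnable toy that pulls drug names and doses out of a transcript with pattern matching. To be clear: this is my own hypothetical sketch, not the Quantum implementation, and a real system would use a trained medical NER model rather than a hand-built drug list:

# Toy prescription extraction. DRUGS is an assumed mini-formulary for the demo.
import re

DRUGS = ("lisinopril", "metformin", "atorvastatin")  # hypothetical list

def extract_prescriptions(transcript):
    """Return (drug, dose) pairs found in the transcript text."""
    found = []
    for drug in DRUGS:
        match = re.search(rf"\b{drug}\b\s+(\d+\s*mg)", transcript, re.IGNORECASE)
        if match:
            found.append((drug, match.group(1)))
    return found

note = "Let's start lisinopril 20 mg daily and continue metformin 500 mg."
print(extract_prescriptions(note))  # [('lisinopril', '20 mg'), ('metformin', '500 mg')]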

In future posts I will drill further down into the components of the Quantum system and other AI applications and approaches that I am working on.

Jim Pyers

AI - #5 (How Generative AI Tools Work)

How Generative AI tools work, in “its” own words? Is “it” alive? :>)

Generative AI tools predict the next token (tokens are the basic units of text or code that are used to represent language) by using a technique called neural network language modeling. This technique involves training a large neural network on a massive dataset of text and code. The neural network learns to identify patterns in the data, and then uses those patterns to predict the next token in a sequence. An intentionally simplified view of the process.
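
As a deliberately tiny illustration of "predict the next token", here is a bigram frequency model: no neural network at all, but the core loop (learn patterns from data, then emit the most likely next token) is the same idea in miniature:

# "Train" by counting which token follows which, then predict the most
# frequent follower. A toy stand-in for neural language modeling.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the most frequent token seen after `token`."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the')
print(predict_next("on"))   # 'the'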

Let's let Bard explain itself in a little more detail :>) (generated from a Bard prompt)

"The specific neural network architecture that Bard uses is called a transformer. Transformers are a type of neural network that are particularly well-suited for natural language processing tasks. They are able to learn long-range dependencies between tokens, which is essential for predicting the next token in a sequence.

In addition to the transformer architecture, Bard also uses a number of other techniques to improve its ability to predict the next token. These techniques include:

Word embeddings: Word embeddings are a way of representing words as vectors of numbers. This allows the neural network to learn the relationships between words, which can help it to predict the next token more accurately.

Attention: Attention is a technique that allows the neural network to focus on specific parts of the input sequence when predicting the next token. This can help the neural network to learn more complex patterns in the data.

Beam search: Beam search is a technique that is used to generate the most likely sequence of tokens. This technique is used by Bard to generate text, translate languages, and answer questions.

The combination of these techniques allows Bard to predict the next token with a high degree of accuracy. This makes Bard a powerful tool for a variety of natural language processing tasks."
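
Of the techniques Bard lists, attention is the heart of the transformer. Here is a minimal NumPy sketch of scaled dot-product attention (toy shapes; real models add learned projections and many parallel heads):

# Each position scores every other position, softmaxes the scores into
# weights, and returns a weighted average of the values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per query
    return weights @ V                              # blend of the values

Q = K = V = np.random.randn(4, 8)  # 4 tokens, 8-dim embeddings (self-attention)
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)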

What the above underscores is that there is a tremendous amount of compute resources needed to process even basic requests. Currently, there is no answer for this, nor is there a business model. It's not dissimilar to the bitcoin mining paradox: coins get rarer, while the resources to mine those coins rise. The resource conundrum is one reason that Nvidia just joined the short list of trillion-dollar companies, due to high demand for its GPU products, which are specially suited to the needs of AI applications.

GPUs (Graphics Processing Units) are superior to CPUs (Central Processing Units) for AI because they are designed for parallel processing. GPUs can perform multiple calculations at the same time, which is essential for AI tasks that involve large amounts of data and processing power.

CPUs, on the other hand, are designed for serial processing. This means that they can only perform one calculation at a time, which can make them slower for AI tasks. (This story, and techniques are evolving - it's early days).

Here are some of the specific reasons why GPUs are better than CPUs for AI:

GPUs have more cores: GPUs have hundreds or even thousands of cores, while CPUs typically have only a few cores. This means that GPUs can perform many more calculations at the same time. Also, GPUs have more memory bandwidth: This means that they can transfer data between the CPU and the GPU much faster. This is important for AI tasks that involve large amounts of data. In addition, Matrix multiplication is a common operation in AI, and GPUs are much better at performing this function vs. CPUs.

Generalizing, GPUs are specifically designed for parallel processing by their nature (parallel processing of graphic intensive applications), while CPUs are not.

To summarize, AI tools stack multiple processes (transformers, word embeddings, attention, beam search, etc.) against large, trained databases to simulate natural language responses. The value of this in the marketplace is TBD. However, I could see that enterprises may want their own versions of these tools for company-specific data. This could make it easier for employees to do their jobs per the company's internal policies and processes, which could be easily accessed through their own intranets with quick responses to employee questions. Applying this to the outside world, companies with generative AI tools could be more responsive and competitive in meeting the needs of their customers by augmenting staff with such smart tools with access to the entire volume of corporate data (combined with real-time competitive data from external sources). Et voila!

More meaningful examples of such AI techniques to follow!

Jim Pyers

AI - #4 (AI vs Human Brain)

I was originally planning to continue the development of a simple AI LLM and start down the path of merging the various AI data sets required to create a form of "Intelligence"; this is sometimes referred to as data fusion. Essentially, it means running algorithms against the data sets to make sense of or refine the information for a particular or desired purpose. That next step was to simply understand the various components of an AI system. However, I first wanted to tackle some definitions.

Before we continue on the current path of “demonstrating” how AI Data Models might work artificially in the "real" world … I wanted to explore a few more “general” definitions, just for fun.

Definitions – Source: Webster's Online Dictionary

1- Artificial Intelligence or “AI”.  

1: a branch of computer science dealing with the simulation of intelligent behavior in computers

2 : the capability of a machine to imitate intelligent human behavior

Notice how in the basic definition of AI above, there is no implication that AI machines are anything more than a poor imitation or simulation of the human brain. This is a far cry from claims of “General AI”, which I will explain below. In fact, even though current software and hardware technologies are smaller, better, faster (and cheaper) and more impressive than ever, what was developed in the past is technically executed in pretty much the same way today - all machines are still based on machine instructions in a binary environment. Furthermore, the never-ending battle of centralized (cloud) computing versus distributed (local or edge) computing has been ongoing since the invention of computers. I was there and worked on most of the platforms that were part of these evolutions, from the early 80’s through today’s more modern architectures and on to mobile. Anyone that worked on mainframes, mini-computers, workstations, PCs, and mobile devices, and dealt with dumb terminals, client server, and cloud, knows that we really have not improved things that much since the 70's or 80's. Software has only incrementally improved. Hardware has made compelling advancements, but the foundation of CPU/GPU computing and storage technology is relatively the same. Even connectivity is based on the Internet Protocol, which was invented in 1974. Will AI use 1974 technology to work? The same year as the explodable Ford Pinto, by the way, which is perhaps no coincidence?

2– Artificial General Intelligence (AGI):

AGI is a type of hypothetical intelligent agent.[1] The AGI concept is that it can learn to accomplish any intellectual task that human beings or animals can perform.

Emphasis in this definition is on “hypothetical”. The idea that compiled code can “learn” and “operate” like the human brain has been the holy grail of AI since the term was first coined at Dartmouth in 1956. A singularity, so to speak, where the machine and the man cannot be differentiated - AND the machine can learn equal to or even better than the man. Creating a computer that could “model” a single human brain "sounds" straightforward: if we can just "build" the necessary or equal number of circuits and storage capacity of the human brain in a machine, then that machine should be able to function with similar or equal efficiency and function as the human brain. In reality, this is impossible today and likely impossible tomorrow; we are not even close. We are decades away, if not centuries, from even being close. Simple and basic functions of the brain like sight, hearing, and speech can be imitated by machines to some degree, but cannot be duplicated in capability or function compared to what a single brain does by its innate nature - and of course the brain develops itself over time as a human grows. Software and hardware simply cannot do this. How do I know this?

Next time you purchase something on Amazon, like 5 pairs of underwear (or let that fact slip while your "smart" speaker is "listening") ... watch how you will be inundated with options to buy underwear, in funpaks, in glowing colors, at a discount, in plastic, in special delivery options, etc. etc. ad nauseam for the next 8 weeks. THIS IS AI!!

Next example: dial any customer service number and speak with your friendly neighborhood robot. Before you throw that phone through the wall, just hang up and save yourself a few bits of your data plan. So don’t worry, this crap is not taking over the world anytime soon.

Despite quantum and shared network computing, gazillions of MIPS, and software trickery, mimicking even a single brain is still not in the realm of the possible today. Even very simple brains like mine, or that of an amoeba, or any low-level form of intelligence cannot be modeled successfully, even in simulation; not in any working or "living, learning" form. This is despite the vast amount of super compute power we have at our fingertips - which, by the way, does not diminish our accomplishments in this realm, which obviously remain formidable.

Sorry folks, please don't be depressed that we won't have your favorite fantasy: intelligent sex-kitten robots typing your term paper or building your next corporate PowerPoint deck while you stay locked in your Mom's basement immersed in your Vision Pro for days at a time (and cooking your meals, writing your emails, and making sure the dogs get walked). Ok, so we won't have any fun with AI for quite a while, but let's see what it CAN do!

Jim Pyers

AI - #3 (Hello World)

In the 2nd (#2) post of the series, we wrote some pseudo code (with AI help) to understand what is happening in the deep, dark underworld of AI. The initial code sample was to learn about the core components or elements of an LLM (Large Language Model). This effort produced a callback to every piece of software EVER written: software consistently offers two basic characteristics, INPUT and OUTPUT. We will dive further into how this helps us in our next example.

Of course, in later segments, we plan to expand the complexity (some) by increasing the number of inputs, nodes, data sets, and complexity of the model training (perhaps).  My goal is to keep it as simple as possible and offer some entertainment value. 

For this segment we will focus on the Input of our initial fundamental code sample (refer to Post #2). In this data input we are going to create the all-time classic test app "Hello World". This app has been written in just about every conceivable language and is universal in its simplicity.

/********************  Hello World for AI LLMs  ********************/

Below is a sample data set to create a "hello world" LLM with a very simple neural network model:

Code sample

# Import the libraries the sample needs to run
import numpy as np
import tensorflow as tf

# Input data: Keras needs numeric labels, so encode
# 'hello' -> 0 and 'world' -> 1
x = np.array([1, 2, 3, 4, 5], dtype=np.float32)
y = np.array([0, 1, 0, 1, 0])  # 'hello', 'world', 'hello', 'world', 'hello'

# Create a neural network model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')  # one output per class
])

# Compile the model (sparse loss matches the integer labels above)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x, y, epochs=10)

# Evaluate the model
model.evaluate(x, y)

# End Program

## NOTE from the author and brief explanation of the Output

Obviously, this data set is VERY simple; it is enough to train a very simple neural network model to generate the text "hello world". The model is trained using the Adam optimizer and a categorical crossentropy loss function. The model is evaluated on the same data set that it was trained on, and it should achieve an accuracy of 100%. This means that the model is able to correctly generate the text "hello world" for all of the input data.

Some descriptions on how the Input data is processed.

A few details to better understand the code. What is "Keras"? Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation; being able to go from an idea to a result as quickly as possible is key to doing good research.

What is TensorFlow, you ask? Don't. :>) It's a machine learning (open source) software library that was developed by Google. More software with input and output. I already know your next question. Ok, well then, what is "Adam"?

The Adam optimizer is defined by the "book of knowledge" something like this: "Adam is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments…." ... or whatever the hell that means? In English, in my most simplistic terms, “Adam” is an algorithm that attempts to establish data rules for "classification" and/or subtle "relationships" between the data elements or “nodes”. Sort of like how your brain works (making connections between ideas or objects), but not exactly. For our purposes, “Adam” is a black box that inputs “data” and outputs “knowledge”. I could bore you with more details, but neither of us would understand!

Of course, “crossentropy” is a fancy word for the difference between two probability distributions. Now, I failed "prob & stats" at least three times, so good luck figuring this one out! It is basically a configuration parameter for how the model “learns”.
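
For the brave, here is the whole cross-entropy idea in a few lines: it measures how "surprised" the model's predicted distribution is by the true answer, so lower is better (a toy two-class example matching the hello/world labels above):

# Cross-entropy between a true distribution and a predicted one.
import math

def cross_entropy(true_dist, predicted_dist):
    return -sum(t * math.log(p) for t, p in zip(true_dist, predicted_dist) if t > 0)

truth = [1.0, 0.0]                       # the correct class is 'hello'
print(cross_entropy(truth, [0.9, 0.1]))  # confident and right -> ~0.105 (low loss)
print(cross_entropy(truth, [0.5, 0.5]))  # pure guessing       -> ~0.693 (higher loss)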

In software, there are a million ways to skin a cat. This example is one "hypothetical" way and was not compiled or tested -- so don't yell at me if your code outputs gibberish, "bad" words, or worse, gains sentience and takes over the world.

Finally, to summarize common data inputs into a specific LLM, I will share a more practical example below:  

One real-world example is a project I have worked on recently. Imagine creating an LLM that begins with the data inputs of interactions between a general practitioner doctor and their patients (anonymized, of course). This data is collected either via voice-to-text translation (transcription), manually input through the doctor's electronic medical record system, or a combination of inputs. Once ingested into the appropriate LLMs, the captured data could be utilized to optimize the patient/doctor interactions (save time), improve decisions by enabling more accurate and efficient diagnosis, provide consistent, proven treatment steps, and even ensure proper payment (that the proper CPT codes are included in the EMR for accurate and complete billing), all optimized and fine-tuned based on the doctor's interactive patient history and supplemented by pre-generated templates for disease-state treatments (ensuring consistent treatment plans based on the experience (data)).

Jim Pyers

AI - #2 (Sample LLM)

NOTE: Within the storm and complexities of ChatGPT, Bard, etc., I am attempting to keep it simple and fun, yet AI is often neither. Simple is relative, but hopefully even technical folks will enjoy a perhaps unique look at AI. Also, forgive the sometimes mocking tone; it’s entirely intentional. I will make futile attempts at humor and satire, sprinkled with sarcasm, but AI deserves it. RIIIGGGHHHT. Bear with me and enjoy the topic!


To start, I will quote the famous football coach Vince Lombardi “This is a football”. Well, in Artificial Intelligence or “AI” from now on, “This is a computer”.

The basics of AI include Neural Networks, Large Databases (the silicon crowd will say “massive”), and lots (technical jargon meaning more than you have) of Compute power. We will work on defining each of these in exec-level detail as part of this series. Use of the large databases can be simplified to an acronym (of course, did your Mom not tell you to never use TLAs?) called LLMs (Large Language Models). We will expose (I mean explain) LLMs in more detail as part of the series.

How do I know any of this to be true? Well, I looked it up on ChatGPT and Bard, of course. Type some questions into these engines (or similar) and you will get answers along these lines, so it must be fact.

So what are some intended or theoretical uses for “AI” and these new models?

  1. Generating text such as poems, code, scripts, musical pieces, email, letters, bad rap lyrics, and most importantly, plenty of fake news reports. Say bye-bye to all the talking head news models and “journalists” writing for the New York anything.

  2. Translating languages such as English to Spanish or French to German. Most importantly for our future however, is Chinese to English.

  3. Creative writing seems counterintuitive to an AI system, but the claim is there. Blog posts, articles, poems, stories, scripts, musical pieces, etc. are generated from thin air (or LLMs) and will make grading papers for what’s left of academia a near impossibility … unless of course the teachers grade the papers with AI (touché!). Wait … we don’t need students or teachers anymore! Problem solved.

Ok, before we jump off into the deep end, we can start generating some pseudo code for how an AI LLM model/engine might work in the “real world”.

Intentionally Simplified LLM functional model to demonstrate and outline the pseudo code sample below.


Below is a simple example of an AI LLM in pseudo code:

// **** Pseudo code AI LLM example

function LLM(input_data, output_data)

    // Initialize the neural network
    network = create_neural_network(input_data, output_data)

    // Train the neural network
    train_network(network, input_data, output_data)

    // Use the neural network to generate output data
    output = generate_output_data(network, input_data)

    // Return the output data
    return output

end function

Basically, what is happening in the above “code” is that the system is inputting a neural network model and some context-related and very large data source(s), then combining this information and creating a “trained” LLM model that can then be interrogated and interpreted for use by humans (or other AI models or systems). The core components of these models - the neural networks, databases, training, and compute aspects of these AI LLMs - will be deconstructed and explored in coming segments of this series.

AI practice and Task of the Day: Use Bard or ChatGPT (or others) to work on 3 tasks per day. See if it is a useful tool. For example, I was asking Bard each day for the biblical verse of the day. The first couple of days were great, with a verse and a useful explanation of the verse’s meaning. However, on the 3rd or 4th day of this practice, it began repeating itself. Hmmm, that was not so good. Even changing up the questions was not overly helpful in avoiding the repetition. I will keep trying. I am sure to expect ever-improving versions of these products as they develop from version 1.uh-oh and beyond.

AI | Jim Pyers

Generative AI - #1 (Deconstruction)

NOTE: This is an attempt to create an executive-level (AI primer) series to assist in understanding AI (Artificial Intelligence) technology, use cases (i.e. applications), potential pitfalls, and maybe even some non-working code samples :>). Given the recent explosion of interest and concern around Artificial Intelligence, knowing a little bit about what’s “under the hood” seems like a good idea? (For our first exercise, close your eyes when reading news stories and try replacing “AI” with “Blockchain”, and you may possibly get a small picture of what might be happening here.) Hopefully this mini-series will serve to demystify the claims around AI and provide basic insight into the fundamentals of what media and pundits refer to as “The next big thing, ready to destroy the world with sentient robots” ... that was a good movie. So come along on my journey as I try to figure this out and deconstruct those pesky little facts!

Welcome! Interested in this “new” AI revolution? I had software developer peers/buddies working on “AI” in the mid-90’s, so this is really nothing new. I remember one of my early programming partners (I learned a lot from “Steve”) at a financial software company in Zurich. He left to take a job in “AI” in 1985! So is this really new? For example, I have been working on Conversational AI (NLP, Digital Assistants) for 5+ years. So what changed? The above duly noted, suddenly there is massive interest in AI, seemingly out of nowhere. Is it real? Is it a distraction? Is it useful? These and more questions will be uncovered during this series. Of course, there are some fun new capabilities that I will be exploring and reviewing (including source code examples over the next few weeks) to attempt to demystify the mystical. Just for fun, today I asked Bard for the bible verse of the day. This is what it gave me back:

The Bible verse of the day for Friday, June 2, 2023 is from Revelation 3:20:

"Behold, I stand at the door and knock. If anyone hears my voice and opens the door, I will come in to him and eat with him, and he with me."

This verse is a reminder that Jesus is always near, waiting for us to open our hearts to him. When we do, he will come in and fill us with his love and peace.

Here are some thoughts on this verse:

  • Jesus is always near. He is not far away or out of reach. He is always with us, even when we don't feel his presence.

  • Jesus is knocking at the door of our hearts. He is not forcing his way in, but he is waiting for us to invite him in.

  • When we open the door to Jesus, he comes in and fills us with his love and peace. He gives us the strength to face whatever challenges we may be facing.

If you are feeling lost or alone, know that Jesus is there for you. He is waiting for you to open the door of your heart to him.

Hmmm ... maybe Bard is not as bad and evil as I first thought? So kick back, enjoy the series, and continue on to AI Series - #2 ... “it's not a job, it's an adventure”.

— Jim

Sports | Jim Pyers

EJ’s High School Career Synopsis

Leadership!

Elijah "EJ" Pyers, Liberty High School  2020-2022

Dear Family and Friends,

My second child graduated from High School ... hoody hoo!  One more to go.  Elijah “EJ” made us very proud every minute, in the classroom and on the court.  His hard work and dedication were duly rewarded with an outstanding Senior year and an excellent high school career.

EJ for deep 3 vs Ridge! Stephortless.

On Dec 5th, 2020, EJ and I loaded up our cargo trailer and moved out to Arizona to escape the commie tyranny and lunacy of California.  It was either Florida or AZ, and we settled on AZ so we could make it back to the family in Escondido more often.

Thus, we made the very difficult decision to move to AZ and embark on a new path for EJ's high school career, right in the middle of it; in hindsight, it could not have turned out much better.

Liberty was a fantastic school and program, and in Mark Wood we found a Dave Lynch archetype: a great person, coach, and leader who has established a basketball program of high character and high performance, one that helps his young men far beyond basketball.  For me and the family, it was an exciting and fun 2 years of basketball - and being in AZ allowed EJ's grandparents, family, and friends to attend.  Many of my former teammates and friends from AZ and "the Canyon" were also able to come watch EJ and the Liberty Lions play “The Game” we love.

Give me Liberty! Great fans in the Student section!

After losing 9 excellent seniors from the 2020-21 team, perhaps the best Liberty basketball team in history, 2022 was expected to be a rebuilding season for the Lions.  EJ was the only returning starter.  Two key returning players who contributed heavily on the 20-21 squad, both stellar 3-and-D players, Davis Hester (Jr.) and Trevor Owens (Sr.), were expected to take on the roles of team leaders.  As fate would have it, Davis was seriously injured during summer league and would miss the 1st half of the '22 season, and Trevor would spend much of the season hindered by injuries, playing in only about half of the games; had these two been healthy, it would have made what turned out to be a great Liberty season even more impressive.  The addition of two transfers, Nick Zambito (hard-nosed PG) and Ky Green (brother of NBA player Josh Green), gave us a talent boost, and the emergence of Jr. Caden Hunter as a top player in the State (after being injured most of his sophomore year) gave us hope!

Though our lack of experience and many new players showed at the start, by the end of the season the players and coaching staff had done a great job of building an outstanding team that was competing for the State Title.  We won the 6A Desert Valley Region for the 2nd straight year (Back to Back), finished 19-8 (9-1 in conference), and won a State Tourney playoff game.  EJ finished his high school career with 4 section championships and a 39-1 career record in conference play! (whoa!)

85% Free Throws as a Senior. 2nd in Conference.

EJ's 4-year varsity career included playing in 104 games and starting 75 of them.  His teams' career record was 74-30.  Liberty was 36-11 during EJ's tenure, winning back-to-back conference championships and making the State Tournament both years.  In the '21 AZ 6A State Tourney, Liberty won vs Desert Vista (the prior year's State champ) and fell to Brophy Prep in the final 8.  In '22 we beat Chaparral but fell to Pinnacle at their place, finishing as one of the final 16 teams in the State.

In '22, EJ was All-Conference honorable mention again and led the conference in assists for the second straight year; he was also 2nd in FT% at 85%, 4th in Steals, and 1st in Offensive Rebounding as a PG.  At the AZ State level he was 3rd overall in Assists and 1st in Offensive Rebounding among PGs, and his two-year assist total led the AZ 6A State!

After steal, none much faster than EJ as he broke away!

There were many great moments throughout the season, including road wins @Pinnacle and @Boulder Creek, two excellent teams in their own right.  EJ dominated the glass vs Pinnacle and made clutch play after clutch play vs Boulder for what was possibly the biggest win of the year, on the road.  Boulder Creek (3rd in our conference) would make it all the way to the State Semifinals, leading the entire game vs Hamilton before losing on two of the most insane calls I have ever witnessed.  The Pinnacle, Boulder Creek, and Liberty games next season should be something to watch FOR SURE!  This '22 season, those games were some of the BEST high school basketball I have ever watched.  Amazing level of play.

Two points for EJ against Apollo.

Family night @ Mountain Ridge.

Back Row: Grandpa, Grandma, Jazzy, Sarafina, Opa, Nana.

Front Row: “Bingo, 2 tickets to the front row”

I want to thank all of our friends and family who supported EJ throughout the high school phase of his journey; EJ's next chapter is just getting started.  Thanks to the Pyers, Sundstroms, Feitls, Paul Vaden, the McIvers, Bill Dunn, Rudy Sanchez, Angel Delgadillo, the Cameltrotters, and the many others who supported his significant efforts.  Of course, without Mom (Jazzy) and his soon-to-be-star sister, Sarafina - his most loyal fans - the journey would not be complete.  It goes without saying that we appreciate Liberty High School and the Liberty basketball program for enabling Coach Wood to create such a stellar program.

Liberty - Best basketball fans in the universe …. Grace & Spencer sporting the EJ Fathead! Evan nearby :>)

Coach Wood wrote about EJ:

Pic: Coach agreeing with a referee’s call ;>) 

"E.J. exemplifies the selflessness of the point guard position.  His has a pass first mentality that has a way of setting up his teammates up for success.  He holds the record in the program of the most assists in a game (13).  We led the state in scoring back to back years.  E.J.'s ability to set his teammates up in scoring positions was a big reason why.

E.J. has tremendous character.  In the classroom and on the court, E.J. is trustworthy and dependable.  He is a player who maintains his integrity regardless of the situation.....

Coach Mark Wood" 

During EJ's career at Liberty, he left his name in the record books while playing only two seasons there.  His 4-year varsity stats would have been even more impressive!

EJ prepping lob for Ky Green off the backboard - with the “MJ” tongue wag! Throw down coming, LOOK out below!

Watch below or click the link to EJ's channel: EJ to Ky

Crowd loves the EJ to Ky connection!!

EJ's Career impact @ Liberty High School - Records

#1 Highest Value Point Rating in a Career (1.80) - 2020-22

#1 Highest Value Point Rating in a Season (1.96) - 2020-21

#1 All Time Assist Leader - Average (5.2 apg)

#1 Most Assists in Game (13) vs Shadow Ridge ('21)

#2 All Time Assist Leader (243) - Total for 2 seasons

#2 All Time Season - Total (123) - 2021

Link to Liberty Varsity Basketball School Record Book:  Liberty High School Men's Basketball - (Peoria, AZ) - powered by LeagueLineup.com

EJ enjoying a break!

Below are the statistical details of EJ's Liberty career - details for those who follow in "Dave the Fan" and "Dan the Fan's" footsteps, rolled-up programs and all.

Thanks again for everyone's support over the past 18 years!!!  It's a journey! Love you EJ!

God bless to all my family and friends, and keep hooping - as Sarafina starts at Liberty next week, maybe she is the next Paula Pyers?  Enjoy some family pics, highlights from EJ's season, and stats.

Senior night included a surprise from the Davids and Scotty McIver (#24 at GCU), and Grandma and Grandpa in the house!

A solid addition of team Feitl in town to watch Liberty and EJ put up big numbers!

Update on EJ’s College Choice!

"Give me Liberty!"  

// The Wire

EJ has great vision under pressure. Always puts the ball in the right place, at the right time.

EJ's Senior Year in Video (his youtube channel playlist)

https://youtube.com/playlist?list=PLQDSCrLQyukCHsIJkFaswFonSKl_OXmLk

Break away after steal for easy 2.

2021-2022 Liberty Lions Varsity Basketball Team

Back Row: 40 Brennen Neal SG • Jr., 12 Davis Hester PG, SG • Jr., 42 Thomas Oughton C, PF • Sr., 32 Ky Green SG, PG, SF • Jr. • 6' 6" • 200 lbs., 20 Caden Hunter PF • Jr. • 6' 5", 44 Ring Nyeri PF, C • So., 22 Charlie Berry SF • Sr., 5 Trevor Daukas PG • Jr.

Front Row: 24 Trevor Owens PG, SG • Sr. • 6' 0", 10 Evan Woolery Sr., 30 EJ Pyers PG • Sr. • 5' 10" • 140 lbs., 34 Nick Zambito PG, SG • Jr. • 5' 10" • 160 lbs., 14 Landon Andersen PG • Sr.

EJ's Individual Statistics of Note For Career:

All-Region 6A Team - Honorable Mention (2021 and 2022)

Assists (6A Desert Valley - Conference)

1st - 2022:  4.3 assists

1st - 2021:  5.8 assists

Assists (6A - AZ State)

3rd - 2022:  (117 assists)

1st - 2021:  (123 assists)

1st - 2 year total (240) - 2021, 2022

Free Throws % (6A Desert Valley - Conference)

2nd - 2022:  85% (29-34)

Steals (6A Desert Valley - Conference)

4th – 2022:  (55, 2.0 spg)

11th – 2021: (25, 1.5 spg)

eFG% (Effective Field Goal %)

13th - 2022 - (51%)

10th - 2021 - (53%)

2 Point %

12th – 2022 (52%) – 30/58

26th – 2021 (48%) – 28/58

3 Point %

15th  – 2022 (35%)

7th – 2021 (43%)

Offensive Rebounds

1st among all guards in Arizona 6A (Maxpreps)

5th – 2022 (50)

8th – 2021 (31)

Senior Night 2022!

Liberty Team Accomplishments

2021-2022 (Team Record 19-8, 9-1 in Conference - Conference Champions)

1-- #1 in State of AZ in Scoring (80.9 points per game)

1 -- #15 in the Nation in Scoring (80.9 points per game).  -- National Basketball Scoring - MaxPreps

2 -- #1 in State of AZ in 3 pointers (11.0 made per game)

2 -- #7 in the Nation in 3 Pointers (11.0 made per game) -- National Basketball 3-Point - MaxPreps

3 -- #5 in State of AZ in Assists PPG – (18.5 assists per game)

3 -- #75 in the Nation in Assists (18.5 assists per game) -- National Basketball Assists - MaxPreps

4-- #8 in the State of AZ in Steals per game (14.9 steals per game)

4-- #115 in the Nation in Steals per game (14.9 steals per game) - National Basketball Steals - MaxPreps

5-- #11 in the State of AZ in Rebounds per game (35.6)

5-- #354 in the Nation in Rebounds per game (35.6)

6 – Team record: 2nd most points ever in a single game (121).  Last year's team scored 124!  Also scored 108 in basically 3 quarters.

7—Scored over 100 points four times during the season!  Multiple games played in the 90's with a running 4th-quarter clock (mercy rule).  No shot clock in AZ!

2020-2021 Liberty Varsity Basketball Team

Back Row: 40 Patrick Steitz G • Sr. • 6' 7", 14 Trevor Owens G, SG • Jr. • 6' 0", 5 Brett Chappell G • Sr. • 6' 4" • 180 lbs, 12 Davis Hester G • So. • 5' 10"

Front Row: 34 Blake Johnson G • Sr. • 6' 5", 30 EJ Pyers G • Jr. • 5' 10" • 140 lbs, 20 Caden Hunter PF • So. • 6' 5", 10 Evan Woolery G • Jr. • 6' 0", 20 Sebastian Clancy SG • Sr. • 6' 4", 22 Miles Guliford G, SG • Sr. • 6' 1", 24 AJ Snow SG • Sr. • 6' 0", 50 Kaden Schiefelbein G • Sr. • 6' 2", 32 Jared Cunningham G • Sr. • 6' 2", 44 Jacob Ayars G • Sr. • 6' 4"

Enjoy my recap from the 2020-21 Season at my blog: EJ Pyers - Liberty High School Basketball Recap 2020-21 Season — WiresWorld

2020-2021 (Team Record 17-3, 10-0 in Conference - Conference Champions)

1-- #1 in State of AZ in Scoring (88.3 points per game)

1 -- #2 in the Nation in Scoring (88.3 points per game).  -- National Basketball Scoring - MaxPreps

1 – School Record:  Highest PPG in season ever for Liberty Team – (88.3 PPG)

2 -- #1 in State of AZ in 3 pointers (13.9 made per game)

2 -- #18 in the Nation in 3 Pointers (13.9 made per game) -- National Basketball 3-Point - MaxPreps

2 – School Record:  3 Pointers Made (13.9 made per game)

3 -- #2 in State of AZ in Assists PPG – (19.1 assists per game)

3 -- #35 in the Nation in Assists (19.1 assists per game) -- National Basketball Assists - MaxPreps

3 – School Record:  Assists average per game (19.1 assists per game)

4-- #2 in the State of AZ in Steals per game (15.2 steals per game)

4-- #45 in the Nation in Steals per game (15.2 steals per game) - National Basketball Steals - MaxPreps

4—School Record:  Steals per game (15.2 steals per game)

5 – School Record: Most points in a single game (124).  Broke the record twice: first scoring 110 points vs Dobson, then later breaking it with 124 points vs Kofa.

6—School Record: 44 points in a quarter (scored 40 or more 2x, over 30 multiple times).  No shot clock in AZ!

7—School Record: Scored over 100 points five times during the season, including the first round of the state playoffs!

*******************************************************

Conference Champions! Senior Night 2022. Give me Liberty!

Philosophy | Jim Pyers

The Dismantling of America: Thomas Sowell

This amazing, prescient work by Dr. Thomas Sowell lays out everything that is happening in today's culture and society, which is impressive to say the least.  More impressive is that Dr. Sowell explains “why” it is happening and presents a strong case for the conservative approach to saving what is left of a once FREE and powerful country.  I thoroughly enjoyed Sowell's deep but easily understandable explanations.  Here are links to purchase the book - I listened via Audible (an Amazon company, sadly):

Visit : Dismantling America: and other controversial essays: Sowell, Thomas: 9780465022519: Amazon.com: Books or Dismantling America: And Other Controversial Essays / Edition 1 by Thomas Sowell | 9780465022519 | Hardcover | Barnes & Noble® (barnesandnoble.com)

Enjoy! For a decent complete list of Dr. Sowell’s work try this one: Thomas Sowell | Complete Writings (tsowell.com)

Family | Jim Pyers

Daniel “Dani” Minder

Dani Minder

Dani in the Lenzerheide

Pyers & Minder’s in Escondido

Daniel “Dani” Minder

July 21st, 1967 - May 2, 2022

English:

NOTE: Updated May 2, 2023. We love you and miss you, Dani! More to come.

Daniel "Dani" Minder passed away from an unforeseen heart ailment on May 2, 2022 in his hometown of Zurich Switzerland.  Dani was born on July 21st, 1967 in Zurich.  The first born of Heinz and Ilse Minder, Dani grew up as part of his parents restaurant lifestyle in the heart of an area that served via Bahnhof (Train Station) Restaurant Bahnhof Buffet Selnau (part of Zurich).  Dani attended his early school and trade education in Zurich and studied as an electrician.   He worked several years as an electrician before following his passion of martial arts.  At an early age (late teens), Dani became immersed in Kung Fu and first became an outstanding student, then competitor, successfully competing in National and International Kung Fu tournaments; Ultimately achieving the honor of Swiss National Champion in Wushu while competing successfully in amateur Kick Boxing events as well;  later following his outstanding athletic career, Dani became a professional teacher of the Martial Arts, managing and running several schools, and earning his living as a class teacher and private instructor to many grateful students and martial artists & athletes. (including me! Thank you Sifu Dani)

Dani is survived by his son Goran Marceta, sister Jasmin Minder Pyers, nephew Elijah Pyers, nieces Sarafina Pyers and Natalie Foley (Pyers), aunt Maria Fleischer (sister of Dani's mother, Ilse Minder), and Goran's mother, Zorica Marceta.  Dani has many friends, cousins, and relatives in Switzerland who will be acknowledged as time permits.  As for me, I am Dani's brother-in-law (married to Jasmin).

Dani was preceded in passing by his mother Ilse Minder Thomas (Oct 26, 1939 - Jan 22, 2021), his grandmother Josefine Weber-Meyer (Dec 20, 1916 - Sept 28, 2010), his father Heinz Minder (August 15, 1936 - May 2, 1984), and his grandmother (Omi) Theresia Brenner (born September 17, 1906 - died March 28, 1975).

God bless Dani and may his soul rest in peace in heaven with his family.

Dani's Amazing Athletic Accomplishments in Kung Fu and Martial Arts (Biography):

  • Seven (7) Time Swiss National Champion, Swiss Wushu Federation, in Forms and Sanda, 1995-2000

  • 1997-2000 Member of Swiss National Team in Swiss Wushu Federation in various disciplines

  • Top ten classifications at European Championships and World Championships in Nan Dao, Nan Gun and Nan Quan

  • 1st Place, Nan Dao, European Championships in Athens, Greece, 1998

  • 10th Place - Nan Gun World Championships 1999 in Hong Kong (highest non-Asian finisher)

  • 1996, 1998, 2000, 2002 Participated in the World Chin Woo Championships

Dani in China at the Wall.

Top Ten Finishes

  • 2000 in Tianjin (China): 2 Bronze Medals in Nan Dao and Nan Quan

  • 2002 Chin Woo Grand Champion in Dallas, Texas USA. Competed in 10 different disciplines and won 3 Gold Medals, 4 Silver Medals, and 3 Bronze.

  • 2003 Retirement as a Tournament Athlete

****** Links to Dani's Website, Video & Personal Media

// Dani's Martial Arts School Site - many pictures and videos - Dani's background and biography

STARTSEITE (cma-y.ch)

// Social Sites

Daniel Minder | Facebook

// Example of Kung Fu Techniques - Dani

Chin Woo - Kung Fu (2003) on Vimeo

German:

Daniel „Dani“ Minder verstarb am 2. Mai 2022 in seiner Heimatstadt Zürich an einem unvorhergesehenen Herzleiden. Dani wurde am 21. Juli 1967 in Zürich, Schweiz, geboren. Als Erstgeborener von Heinz und Ilse Minder wuchs Dani im Rahmen des Restaurantlebens seiner Eltern im Herzen von Zürich in Selnau auf. Dani besuchte seine Realschule und Gewerbeschule in Zürich und wurde als Elektriker ausgebildet. Er arbeitete mehrere Jahre als Elektriker, bevor er seiner Leidenschaft, dem Kampfsport, nachging. Schon in jungen Jahren (im späten Teenageralter) tauchte Dani ins Kung Fu ein und wurde zuerst ein hervorragender Schüler, dann Wettkämpfer, der erfolgreich an nationalen und internationalen Kung-Fu-Turnieren teilnahm und schließlich die Ehre des Schweizer Meisters in Wushu erlangte, während er auch erfolgreich an Amateur-Kickbox-Events teilnahm. Später, nach seiner herausragenden sportlichen Karriere, wurde Dani ein professioneller Lehrer der Kampfkünste, leitete mehrere Schulen und verdiente seinen Lebensunterhalt als Klassen- und Privatlehrer für viele dankbare Schüler, Kampfkünstler und Athleten.

Dani hinterlässt seinen Sohn Goran Marceta, Schwester Jasmin Minder Pyers, Neffe Elijah Pyers, Nichte Sarafina Pyers, Nichte (durch Heirat) Natalie Foley (Pyers), Tante Maria Fleischer (Schwester von Ilse Minder) und Gorans Mutter (Zorica Marceta). Dani hat viele Freunde, Cousins und Verwandte in der Schweiz, die gewürdigt werden, sobald es die Zeit erlaubt.

Im Tod vorausgegangen sind Dani seine Mutter Ilse Minder Thomas (26.10.1939 - 22.01.2021), seine Großmutter Josefine Weber-Meyer (20.12.1916 - 28.09.2010), sein Vater Heinz Minder (15.08.1936 - 02.05.1984) und seine Großmutter (Omi) Theresia Brenner (17.09.1906 - 28.03.1975).

Dani's Wichtigste Turniererfolge:

  • Siebenfacher (7) Schweizermeister der Swiss Wushu Federation in Formen und Sanda, 1995-2000

  • 1997-2000 Mitglied der Nationalmannschaft der Swiss Wushu Federation in verschiedenen Disziplinen

  • Top Ten Klassierungen an Europameisterschaften und Weltmeisterschaften in Nan Dao, Nan Gun und Nan Quan

  • 1. Rang Nan Dao EM in Athen Griechenland 1998

  • 10. Rang - Nan Gun Welt Meisterschaft 1999 in Hongkong (bester nicht Asiate)

  • 1996, 1998, 2000, 2002 Teilnahme an den World Chin Woo Meisterschaften

Verschiedene Top Ten Klassierungen

  • 2000 in Tianjin (China) 2 mal 3. Rang in Nan Dao und Nan Quan

  • 2002 Chin Woo Grandchampion in Dallas TX Start in 10 verschiedenen Disziplinen 3 mal 1. Rang 4 mal 2. Rang 3 mal 3. Rang

  • 2003 Rücktritt als Turnier-Athlet

****** Links to Dani's Website, Video & Personal Media

// Dani's Martial Arts School Site - many pictures and videos - Dani's background and bio

STARTSEITE (cma-y.ch)

// Social Sites

Daniel Minder | Facebook

// Example of Kung Fu Techniques - Dani

Chin Woo - Kung Fu (2003) on Vimeo

Podcasts | Jim Pyers

UnWired Episode 33: “Kyrie Irving - The Greatest of All Time”

SHOWNOTES

UnWired, Episode 33 - for Sunday, December 5th, 2021:  

Show Title:  “Kyrie Irving - The Greatest of All Time”

Support the Show: Donate — WiresWorld

Show NOTES: Blog 2 — WiresWorld

Show Credits & Sources & More Credits:

As always, thanks to Adam & John of the NoAgenda Show - as many of the clips I use for the show come from their collection and recent productions.  In full disclosure, I am a regular producer of the NoAgenda show and recommend to anyone listening to try it out.   Visit Noagendashow.net or bing or duckduckgo Adam Curry or John C. Dvorak.  Thanks to Moe Factz too!

Home - MOE FACTZ

No Agenda (noagendashow.net)

Show Support:

Please support the UnWired show at wiresworld.net/donate.  We are the value for value model - if the show brings any value to you, contribute what you can.  Also, clips, jingles, and ideas are welcome - and you will be credited as a show producer for any contribution we use on the show.

Media Sources: 

// Tucker & Whitlock - on Chappelle and Kyrie

Tucker Carlson - Jason Whitlock On Kyrie Irving Getting Kicked Out Of The NBA - YouTube


// Fox & Friends - Kilmeade & Kanter

Enes Kanter calls out NBA: When the things they criticize affect their business, 'they stay silent' (yahoo.com)

// Moe Factz - Show #70

http://adam.curry.com/html/MoeFactz70FourFreedo-3b6wGpzJCrK1xG9r5486h28lTmpWPL.html

// First Take - Stephen A Smith

https://www.youtube.com/watch?v=eh7d1r63-2A


// Breakfast Club

https://www.youtube.com/watch?v=xuD6w_xJPnA

// Kyrie on Vaccine Mandates

https://www.youtube.com/watch?v=lJA8IDn8SUs

// Draymond on Vaccine / Wiggins

https://www.youtube.com/watch?v=PG6bIyaHDNM


// Jonathan Isaac on vaccine status

https://www.youtube.com/watch?v=FNZE9hdBTEE

Jim Pyers

UnWired Episode 32: The Greatest Mind You Never Heard of - Thomas Sowell

SHOWNOTES

UnWired, Episode 32 - for Friday, November 16, 2021

Show Title:  Thomas Sowell

SubTitle: Lockdown Counter: Day 449 (on pause)

Credits: 

Kevin Keith - Music:

More Credits:

As always, thanks to Adam & John of the NoAgenda Show - as many of the clips I use for the show come from their collection and recent productions.  In full disclosure, I am a regular producer of the NoAgenda show and recommend to anyone listening to try it out.   Visit Noagendashow.net or bing or duckduckgo Adam Curry or John C. Dvorak. 

Show Support:

Please support the UnWired show at wiresworld.net/donate.  We are the value for value model - if the show brings any value to you, contribute what you can.  Also, clips, jingles, and ideas are welcome - and you will be credited as a show producer for any contribution we use on the show.

Thomas Sowell

Thomas Sowell QUOTES:

"The next time some academics tell you how important diversity is, ask how many Republicans there are in their sociology department." - Thomas Sowell

"Random Thoughts". Column at Jewish World Review, www.jewishworldreview.com. July 31, 1998.


"Racism is not dead, but it is on life support -- kept alive by politicians, race hustlers and people who get a sense of superiority by denouncing others as 'racists'" - Thomas Sowell



"When you want to help people, you tell them the truth. When you want to help yourself, you tell them what they want to hear."

- Thomas Sowell


Thomas Sowell (2011). “The Thomas Sowell Reader”, p.398, Basic Books


"Not since the days of the Hitler Youth have young people been subjected to more propaganda on more politically correct issues. At one time, educators boasted that their role was not to teach students what to think but how to think. Today, their role is far too often to teach students what to think on everything from immigration to global warming to the new sacred trinity of 'race, class and gender.'" - Thomas Sowell


"What do you call it when someone steals someone else's money secretly? Theft. What do you call it when someone takes someone else's money openly by force? Robbery. What do you call it when a politician takes someone else's money in taxes and gives it to someone who is more likely to vote for him? Social Justice."  - Thomas Sowell

"It is bad enough that so many people believe things without any evidence. What is worse is that some people have no conception of evidence and regard facts as just someone else's opinion." - Thomas Sowell



"ANYONE who studies the history of ideas should notice how much more often people on the political left, more so than others, denigrate and demonize those who disagree with them - instead of answering their arguments." -  Thomas Sowell


"Blacks were not enslaved because they were black but because they were available. Slavery has existed in the world for thousands of years. Whites enslaved other whites in Europe for centuries before the first black was brought to the Western hemisphere. Asians enslaved Europeans. Asians enslaved other Asians. Africans enslaved other Africans, and indeed even today in North Africa, blacks continue to enslave blacks." -  Thomas Sowell



"Activism is a way for useless people to feel important, even if the consequences of their activism are counterproductive for those they claim to be helping and damaging to the fabric of society as a whole." -  Thomas Sowell


Thomas Sowell (2006). “Ever Wonder Why? and Other Controversial Essays”, Hoover Inst Press

"Whether Barack Obama is simply incompetent as president or has some hidden agenda to undermine this country, at home and abroad, he has nearly everything he needs to ruin America, including a fool for a vice president." - Thomas Sowell


"One of the painful signs of years of dumbed-down education is how many people are unable to make a coherent argument. They can vent their emotions, question other people's motives, make bold assertions, repeat slogans-- anything except reason." - Thomas Sowell


Thomas Sowell (2011). “Dismantling America and Other Controversial Essays (Large Print 16pt)”, p.403, ReadHowYouWant.com

"One of the most important reasons for studying history is that virtually every stupid idea that is in vogue today has been tried before and proved disastrous before, time and again." -  Thomas Sowell


"It is so easy to be wrong-and to persist in being wrong-when the costs of being wrong are paid by others." - Thomas Sowell


Thomas Sowell (1995). “The Vision of the Anointed: Self-congratulation as a Basis for Social Policy”, p.136, Basic Books

"We seem to be getting closer and closer to a situation where nobody is responsible for what they did but we are all responsible for what somebody else did." - Thomas Sowell


Thomas Sowell (2013). “Ever Wonder Why?: and Other Controversial Essays”, p.332, Hoover Press

"There is usually only a limited amount of damage that can be done by dull or stupid people. For creating a truly monumental disaster, you need people with high IQs." - Thomas Sowell


Thomas Sowell (2011). “Dismantling America and Other Controversial Essays (Large Print 16pt)”, p.98, ReadHowYouWant.com


"No one will really understand politics until they understand that politicians are not trying to solve our problems. They are trying to solve their own problems - of which getting elected and re-elected are number one and number two. Whatever is number three is far behind." - Thomas Sowell


Thomas Sowell (2011). “Dismantling America and Other Controversial Essays (Large Print 16pt)”, p.88, ReadHowYouWant.com


"If you truly believe in the brotherhood of man, then you must believe that blacks are just as capable of being racists as whites are. - Thomas Sowell



'A friend from India told me that a countryman of his said: "I want to go to America. I want to see a country where poor people are fat."'

- Thomas Sowell


"In a democracy, we have always had to worry about the ignorance of the uneducated. Today we have to worry about the ignorance of people with college degrees." - Thomas Sowell


Thomas Sowell (2011). “Dismantling America and Other Controversial Essays (Large Print 16pt)”, p.402, ReadHowYouWant.com

"There is nothing so bad that politics cannot make it worse." - Thomas Sowell


"One of the sad signs of our times is that we have demonized those who produce, subsidized those who refuse to produce, and canonized those who complain." - Thomas Sowell


Thomas Sowell (2013). “Ever Wonder Why?: and Other Controversial Essays”, p.328, Hoover Press


"Intellect is not wisdom." - Thomas Sowell


"One of the many disservices done to young people by our schools and colleges is giving them the puffed up notion that they are in a position to pass sweeping judgments on a world that they have barely begun to experience."  - Thomas Sowell

More Thomas Sowell Resources:

// Thomas Sowell Links

Thomas Sowell - Wikipedia


// Thomas Sowell Website

Index (tsowell.com)


// Thomas Sowell Books

Thomas Sowell | Complete Writings (tsowell.com)

// Thomas Sowell quotes

Thomas Sowell | Favorite Quotations (tsowell.com)


// Thomas Sowell Photographs

Thomas Sowell | Photographs (tsowell.com)

// Thomas Sowell - More Quotes

Thomas Sowell Quotes (Author of Basic Economics) (goodreads.com)

// Common Sense - Thomas Sowell

https://www.youtube.com/watch?v=WK4M9iJrgto&t=2665s

// More Recent - Thomas Sowell

https://www.youtube.com/watch?v=aXHQiVsA0bM

// Watch any/all of the videos with Thomas Sowell on the Hoover Institute Channel - here is one

https://www.youtube.com/watch?v=mS5WYp5xmvI

TOP 25 QUOTES BY THOMAS SOWELL (of 605) | A-Z Quotes (azquotes.com)

Thomas Sowell - "Rotten Public Schools"

https://www.youtube.com/watch?v=aaFAUeftTKs


Philosophy | Jim Pyers

Iatrogenics

“The famously mistreated Austro-Hungarian doctor Ignaz Semmelweis had observed that more women died giving birth in hospitals than giving birth on the street. He called the establishment doctors a bunch of criminals—which they were: the doctors who kept killing patients could not accept his facts or act on them since he “had no theory” for his observations. Semmelweis entered a state of depression, helpless to stop what he saw as murders, disgusted at the attitude of the establishment. He ended up in an asylum, where he died, ironically, from the same hospital fever he had been warning against. Semmelweis’s story is sad: a man who was punished, humiliated, and even killed for shouting the truth in order to save others. The worst punishment was his state of helplessness in the face of risks and unfairness. But the story is also a happy one—the truth came out eventually, and his mission ended up paying off, with some delay. And the final lesson is that one should not expect laurels for bringing the truth.”

— Antifragile: Things That Gain from Disorder (Incerto Book 3) by Nassim Nicholas Taleb

https://a.co/6c4UQmH
