




In this article, I’m going to present some of my favorite Python object-oriented and metaprogramming features for refactoring code, as well as other basics required to kill it with Python: a bit of packaging and some testing patterns. I hope this will help my friends understand my code and adopt a global vision for their internal library sets.
Object-oriented programming allows you to refactor and tidy up code by binding a state (attributes, variables) to a bunch of functions (called methods).
For example:
animals = [
    dict(type='dog', name='Brutus'),
    dict(type='cat', name='Tom'),
]

def animal_noise(animal):
    if animal['type'] == 'dog':
        return 'waf'
    elif animal['type'] == 'cat':
        return 'miaou'

for animal in animals:
    print(animal_noise(animal))
Becomes:
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    def noise(self):
        return 'waf'

class Cat(Animal):
    def noise(self):
        return 'miaou'
This encapsulation organizes code by associating data (attributes) with behavior (methods) in a single entity, allowing for Cyclomatic Complexity reduction (we’ll dig into that later):
animals = [
    Dog(name='Brutus'),
    Cat(name='Tom'),
]

for animal in animals:
    print(animal.noise())
Class attributes are defined directly in the class body and shared by all instances, while instance attributes are set per object, typically in __init__.
Let’s see this with an example:
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    # this defines a **class attribute** noise with value 'waf'
    noise = 'waf'

    def bark(self):
        # this returns the `noise` attribute of the object if any,
        # otherwise it will return the `noise` attribute of the class
        return self.noise
Create a Dog object with name brutus:
>>> dog = Dog('brutus')
The object attributes are stored in the object’s __dict__ attribute:
>>> dog = Dog('brutus')
>>> dog.name
'brutus'
>>> dog.__dict__
{'name': 'brutus'}
Call the bark() method; it will use the noise class attribute:
>>> dog.bark()
'waf'
We can set a noise object attribute on the dog object; then self.noise will return the object attribute:
>>> dog.noise = 'wafwaf'
>>> dog.__dict__
{'name': 'brutus', 'noise': 'wafwaf'}
>>> dog.bark()
'wafwaf'
Note that this doesn’t change the noise class attribute!!
>>> Dog.noise
'waf'
As we can see with another dog instance:
>>> other_dog = Dog('lassie')
>>> other_dog.bark()
'waf'
You may also define methods that won’t require instantiating a class:
class Crocodile:
    class_attribute_example = 'class attribute value'

    @staticmethod  # makes the function "unbound" to the class or object
    def static_method_example(some_argument):
        return some_argument

    @classmethod  # binds the function to the class
    def class_method_example(cls, some_argument):
        return cls, cls.class_attribute_example, some_argument
Static methods are like regular functions defined in a class, receiving neither the instance (self) nor the class (cls) automatically.
>>> Crocodile.static_method_example('test')
'test'
The class method is bound to the class: you don’t need to instantiate the class into an object to call it. See the 3 values it returns:
cls is the class type
'class attribute value' is the value of class_attribute_example
some_argument is a normal argument
>>> Crocodile.class_method_example('test')
(<class '__main__.Crocodile'>, 'class attribute value', 'test')
Note that even if you instantiate a Crocodile, its class method will still be bound to the class, not to the object:
>>> obj = Crocodile()
>>> obj.class_attribute_example = 'object attribute override'
>>> obj.class_method_example('test')
(<class '__main__.Crocodile'>, 'class attribute value', 'test')
Inheritance happens when a class inherits from another class:
class Animal:  # doesn't inherit from any class, is an instance of type under the hood
    def move(self, distance, speed=5):
        return distance * speed

class Dog(Animal):  # inherits from Animal: Dog is a subclass of Animal, Animal is a superclass of Dog
    def move(self, distance, speed=10):
        return super().move(distance, speed)
No surprise calling Animal.move:
>>> animal = Animal()
>>> animal.move(3)
15
Calling Dog.move will call its superclass’s move() via super().move():
>>> dog = Dog()
>>> dog.move(3)
30
Read more about the Python Data Model.
In Python, a class may inherit from several classes. Let’s see the behaviour of super() here:
class Animal:
    def __init__(self, name, legs=4):
        self.name = name
        self.legs = legs
        print('New Animal in the wild!')

class Biped(Animal):
    def __init__(self, name, legs=2):
        print('Biped.__init__ start!')
        super().__init__(name, legs)
        print('Biped.__init__ over!')

class Cute(Animal):
    def __init__(self, *args, **kwargs):
        print('Cute.__init__ start!')
        super().__init__(*args, **kwargs)
        print('Cute.__init__ over!')

class Duck(Biped, Cute):
    def __init__(self, *args, **kwargs):
        print('Duck.__init__ start!')
        super().__init__(*args, **kwargs)
        print('Duck.__init__ over!')
Python’s Method Resolution Order (“MRO”) defines the order in which parent classes are searched for methods, following the C3 linearization: roughly left to right, with each class placed before its parents. For Duck(Biped, Cute), the order is Duck → Biped → Cute → Animal → object:
>>> Duck('Daffy')
Duck.__init__ start!
Biped.__init__ start!
Cute.__init__ start!
New Animal in the wild!
Cute.__init__ over!
Biped.__init__ over!
Duck.__init__ over!
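You can verify the linearization yourself through the class’s __mro__ attribute:
>>> Duck.__mro__
(<class '__main__.Duck'>, <class '__main__.Biped'>, <class '__main__.Cute'>, <class '__main__.Animal'>, <class 'object'>)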
Step by step:
Duck.__init__ calls super().__init__, which invokes Biped.__init__.
Biped.__init__ calls super().__init__, which invokes Cute.__init__.
Cute.__init__ calls super().__init__, which invokes Animal.__init__.
Animal.__init__ executes, then control returns up the chain, completing each __init__ in reverse order.
Duck typing avoids rigid type checks, allowing any object with a quack method to be used, regardless of its class.
It relies on the principle: “If it walks like a duck and quacks like a duck, it’s a duck,” enabling polymorphic behavior without inheritance.
Basically, don’t do:
class Duck:
    def quack(self):
        return 'Duck is quacking'

def quack_it(obj):
    if isinstance(obj, Duck):
        return obj.quack()
    else:
        return 'Generic quack'
Instead, do:
def quack_it(obj):
    try:
        return obj.quack()
    except AttributeError:  # no quack attribute on obj
        return 'Generic quack'
This gives you the freedom of substituting the Duck class with something else:
class Dog:  # doesn't inherit from Duck
    def quack(self):
        return 'Dog somehow quacks!'
This allows for more creativity, and especially enables refactoring patterns.
Magic methods in Python are special methods with double underscores (e.g., __init__, __str__) that define how objects behave with built-in operations. They allow customization of class behavior, like initialization (__init__), string representation (__str__), arithmetic (__add__), or comparison (__eq__). They’re called implicitly by Python when certain operations are performed on objects.
Example:
class Animal:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed

    def __str__(self):
        return self.name

    def __eq__(self, other):
        return self.speed == other.speed

    def __add__(self, other):
        return self.speed + other.speed
Demonstration:
>>> brutus = Animal('brutus', 5)
>>> str(brutus) # call brutus.__str__
'brutus'
>>> tom = Animal('tom', 5)
>>> brutus == tom # call brutus.__eq__(tom)
True
>>> brutus + tom # call brutus.__add__(tom)
10
Other common magic methods include __len__ for length and __iter__ for iteration. See the full list in the Python Data Model.
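As a quick sketch, a hypothetical Pack container could implement both, making it usable with len() and for loops (reusing the Animal class from above):
class Pack:
    def __init__(self, *animals):
        self.animals = list(animals)

    def __len__(self):  # called by len(pack)
        return len(self.animals)

    def __iter__(self):  # called by for loops and iter(pack)
        return iter(self.animals)
>>> pack = Pack(Animal('brutus', 5), Animal('tom', 5))
>>> len(pack)
2
>>> [str(animal) for animal in pack]
['brutus', 'tom']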
We can also override the behavior for attributes:
class Anything:
    def __init__(self):
        self.data = dict()

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        return self.data[name]

    def __setattr__(self, name, value):
        # called on every attribute assignment
        if name == 'data':
            super().__setattr__(name, value)
        else:
            self.data[name] = value

    def __delattr__(self, name):
        del self.data[name]
This will store any attribute whose name isn’t data into the data object attribute:
>>> obj = Anything()
>>> obj.foo = 'bar'
>>> obj.data
{'foo': 'bar'}
>>> obj.__dict__
{'data': {'foo': 'bar'}}
>>> obj.foo
'bar'
>>> del obj.foo
>>> obj.__dict__
{'data': {}}
A Design Pattern is a reusable solution to a common software design problem, providing a template for solving it in a specific context. The Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides) authored the seminal book Design Patterns (1994), which categorized 23 classic patterns into creational, structural, and behavioral types.
Find a bunch of Design Pattern examples in Python. They can serve as reference, for inspiration, but it’s always up to you to be creative and find the pattern that introduces the least Cyclomatic Complexity for a given problem.
A great Python tool to measure that is lizard: a static analysis tool that measures cyclomatic complexity, among other metrics, helping identify overly complex code.
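For instance (a sketch, assuming lizard installed from PyPI), point it at a file or a whole package:
$ pip install lizard
$ lizard your_pkg/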
Now, let’s move on to advanced and very cool Python OOP stuff!!
type()
Metaprogramming is writing code that manipulates or generates other code at runtime. It allows programs to treat code as data, enabling dynamic behavior like creating classes, modifying functions, or automating repetitive tasks.
For example, we can create a class on the fly with the type() function; it takes three arguments: the class name, a tuple of base classes, and a dict of class attributes.
Example:
class Animal:
    speed = None

Dog = type('Dog', (Animal,), dict(speed=15))
Is exactly equivalent to:
class Animal:
    speed = None

class Dog(Animal):
    speed = 15
For some reason, I need to create a bunch of subclasses of Animal; it’s boring, let’s for-loop this:
class Animal:
    speed = None

animal_speeds = dict(Dog=15, Mouse=10, Spider=5, Duck=4, Cat=8)

locals().update({
    subclass_name: type(subclass_name, (Animal,), dict(speed=speed))
    for subclass_name, speed in animal_speeds.items()
})
Let’s see what’s happening here:
locals() returns a dict of the current local symbols (variables, classes, etc).
.update() is the dict.update() method, which allows applying another dict to it.
The dict comprehension builds one Animal subclass per entry of the animal_speeds dict.
Note that this trick is only reliable at module scope, where locals() is the actual module namespace.
As such, we can use our classes exactly as if we had declared them manually:
>>> Dog.speed
15
>>> Cat.speed
8
>>> Spider.speed
5
>>> Mouse.speed
10
__init_subclass__ is a special class method in Python, introduced in Python 3.6, that allows a parent class to customize the creation of its subclasses. It’s called automatically when a subclass is defined, enabling the parent to modify or validate the subclass without using a metaclass.
class Plugin:
    registry = dict()

    def __init_subclass__(cls, **kwargs):
        cls.name = cls.__name__
        cls.registry[cls.__name__] = cls

class NetworkPlugin(Plugin):
    pass

class SystemPlugin(Plugin):
    pass
This allows modifying the subclass on the fly, setting the name class attribute automatically:
>>> NetworkPlugin.name
'NetworkPlugin'
>>> SystemPlugin.name
'SystemPlugin'
Also, we’re consolidating a registry of subclasses:
>>> Plugin.registry
{'NetworkPlugin': <class '__main__.NetworkPlugin'>, 'SystemPlugin': <class '__main__.SystemPlugin'>}
Note that you can pass custom arguments to __init_subclass__ via subclass definitions, e.g., class NetworkPlugin(Plugin, type='network').
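A minimal sketch of how that looks, reusing the type keyword from the example above (any extra class keyword flows through the same way):
class Plugin:
    registry = dict()

    def __init_subclass__(cls, type=None, **kwargs):
        # "type" only shadows the builtin inside this method
        super().__init_subclass__(**kwargs)
        cls.type = type
        cls.registry[cls.__name__] = cls

class NetworkPlugin(Plugin, type='network'):
    pass
>>> NetworkPlugin.type
'network'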
Metaclasses in Python are classes that define how other classes are created. They control class creation by implementing __new__ or __init__ methods, allowing customization of class behavior, attributes, or methods. The default metaclass is type.
Metaclasses in ORMs like SQLAlchemy automate table creation and attribute mapping by dynamically generating class attributes at definition time.
Let’s see how to use subclass names to generate table names for example:
import re

def camel_to_snake(name):
    # Add underscore before any uppercase letter, then convert to lowercase
    # Example: camel_to_snake('CamelCase') == 'camel_case'
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

class ModelMetaclass(type):
    def __new__(cls, name, bases, attrs):
        new_cls = super().__new__(cls, name, bases, attrs)
        new_cls.table_name = camel_to_snake(name)
        return new_cls

class Model(metaclass=ModelMetaclass):
    pass

class FlyingDog(Model):
    pass

class AstronautCat(Model):
    pass
Demonstration:
>>> FlyingDog.table_name
'flying_dog'
>>> AstronautCat.table_name
'astronaut_cat'
Moving on to the other half of ORM’s secret sauce: the Descriptor Protocol.
Python descriptors are objects that define how attribute access is handled in a class. They implement __get__, __set__, and/or __delete__ methods to control attribute behavior. Descriptors are used for reusable attribute logic, like validation or computed properties.
class Field:
    def __init__(self, default=None):
        self.default = default
        # this will be set by the metaclass to avoid repeating the name
        # ie: foo = Field()
        # instead of: foo = Field(name='foo')
        # which would be boring
        self.name = None

    def __get__(self, obj, objtype=None):
        return obj.data.get(self.name, self.default)

    def __set__(self, obj, value):
        obj.data[self.name] = value

class PositiveField(Field):
    def __set__(self, obj, value):
        if value < 0:
            raise ValueError("Must be positive")
        super().__set__(obj, value)

class ModelMetaclass(type):
    def __new__(cls, name, bases, attributes):
        # don't reuse "name" as the loop variable: it would overwrite the class name
        for attribute_name, field in attributes.items():
            if isinstance(field, Field):
                field.name = attribute_name
        return super().__new__(cls, name, bases, attributes)

class Model(metaclass=ModelMetaclass):
    def __init__(self, **kwargs):
        # kwargs will serve as initial data
        self.data = kwargs
Demonstration:
class Item(Model):
    name = Field()
    price = PositiveField(default=0)
>>> boat = Item(name='Yacht Shaver Sailboat')
>>> boat.data
{'name': 'Yacht Shaver Sailboat'}
>>> boat.name
'Yacht Shaver Sailboat'
>>> boat.name = 'Loustig'
>>> boat.name
'Loustig'
>>> boat.data
{'name': 'Loustig'}
>>> boat.price = -10 # let's see if they can pay me to take their boat because I'm a PIIIRRAAATTEEEEE!!!
Traceback (most recent call last):
File "<python-input-15>", line 1, in <module>
boat.price = -10 # let's see if they can pay me to take their boat because I'm a PIIIRRAAATTEEEEE!!!
^^^^^^^^^^
File "<python-input-5>", line 20, in __set__
raise ValueError("Must be positive")
ValueError: Must be positive
That is not only super fun to work with, but also, incredibly powerful in refactoring!
Oh, you thought you could go ahead and code descriptors and metaclasses without writing unit tests? I mean, sure, the user of the Model and Field classes will be able to code without testing, but for the descriptor and metaclass part, you need to guarantee that it’s implemented right.
While pytest is more popular due to its simplicity and extensibility, unittest (Python’s built-in xUnit-style framework) is still widely used. Some prefer it because the test cases are object oriented, which I dig, but I still prefer it the other way around.
pytest is simpler to use, has amazing plugin support (I make many plugins myself which I reuse across projects), and provides much friendlier colored output.
Example:
def test_field_default_and_access():
    class TestModel(Model):
        foo = Field(default="bar")

    obj = TestModel()
    assert obj.foo == "bar", "Field should return default value"
    obj.foo = "baz"
    assert obj.foo == "baz", "Field should return set value"
    assert obj.data["foo"] == "baz", "Field should store value in data"
I also made example plugins to write test code for me.
Now, you also want to check your "code coverage", which is the portion of your runtime code that is exercised by the tests. We use the pytest-cov plugin for that.
Demonstration with the above code:
$ pytest -sv test_orm.py --cov=orm --cov-report=term-missing
======================================= test session starts ========================================
platform linux -- Python 3.13.3, pytest-8.4.0, pluggy-1.6.0 -- /usr/bin/python
cachedir: .pytest_cache
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default'
rootdir: /tmp/testp
plugins: asyncio-0.26.0, xdist-3.6.1, env-1.1.5, httpx-0.35.0, django-4.10.0, cov-6.1.1, chttpx-5.1.8.dev1+dirty, cansible-5.0.0rc2, prompt2-5.2.5.dev17+dirty, typeguard-4.4.3, benchmark-4.0.0, hypothesis-6.133.2, anyio-4.9.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 1 item
test_orm.py::test_field_default_and_access PASSED
========================================== tests coverage ==========================================
_________________________ coverage: platform linux, python 3.13.3-final-0 __________________________
Name      Stmts   Miss  Cover   Missing
---------------------------------------
orm.py       25      3    88%   19-21
---------------------------------------
TOTAL        25      3    88%
======================================== 1 passed in 0.56s =========================================
As we can see, our test isn’t covering lines 19 to 21 in our code, corresponding to… PositiveField! Indeed, we didn’t write a test for that one.
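Here is a minimal sketch of the missing test, using pytest.raises to assert that the ValueError is raised:
import pytest

def test_positive_field_validation():
    class TestModel(Model):
        price = PositiveField(default=0)

    obj = TestModel()
    obj.price = 10
    assert obj.price == 10, "PositiveField should accept positive values"
    with pytest.raises(ValueError):
        obj.price = -1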
TDD (Test-Driven Development): A development practice where you write automated tests before writing code. The cycle is: write a failing test, write minimal code to pass the test, refactor. It ensures code is testable, reduces bugs, and drives design.
XP (Extreme Programming): An agile methodology emphasizing collaboration, simplicity, and rapid feedback. Key practices include pair programming, continuous integration, TDD, and frequent releases. It aims to improve software quality and adapt to changing requirements.
Both prioritize iterative development and quality; TDD is a specific technique often used within XP.
TDD is a core practice in XP, but XP also includes non-testing practices like pair programming and collective code ownership.
TDD Cycle:
Write a failing test (red).
Write minimal code to make the test pass (green).
Refactor, keeping the tests green.
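A generic sketch of one loop iteration (the multiply function is purely illustrative):
# 1. Red: write the test first, it fails because multiply doesn't exist yet
def test_multiply():
    assert multiply(2, 3) == 6

# 2. Green: write the minimal code to make the test pass
def multiply(a, b):
    return a * b

# 3. Refactor: clean up while keeping the test green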
The pytest-watch or pytest-watcher tools will greatly help: they will watch your code for changes and re-run the tests automatically!
Oh, you thought you were going to develop 1337 code by print()'ing your way around and without using a debugger like a real programming professional? I don’t think so.
Let’s add a breakpoint at the beginning of the test function, like so:
def test_field_default_and_access():
    breakpoint()
    class TestModel(Model):
        # ... rest of code ...
Run pytest with -s so that it doesn’t capture standard I/O:
$ pytest -sv test_orm.py --cov=orm --cov-report=term-missing -s
================================================ test session starts =================================================
....
test_orm.py::test_field_default_and_access
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB set_trace >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[25] > /tmp/testp/test_orm.py(5)test_field_default_and_access()
-> breakpoint()
5 frames hidden (try 'help hidden_frames')
(Pdb++)
You are left with a gdb-like shell. Here, I’m using pdb++, which I heavily recommend because it has syntax highlighting.
First, you need to know that you can evaluate any Python code inside the shell:
(Pdb++) Field
<class 'orm.Field'>
(Pdb++) print('hi')
hi
With the help command, we can see the list of pdb shell commands:
(Pdb++) help
Documented commands (type help <topic>):
========================================
EOF cl disable h l q s unt
a clear display help list quit step until
alias commands down hf_hide ll r sticky up
args condition ed hf_unhide longlist restart tbreak w
b cont edit ignore n return track whatis
break continue enable interact next retval u where
bt d exceptions j p run unalias
c debug exit jump pp rv undisplay
Miscellaneous help topics:
==========================
exec hidden_frames pdb
Undocumented commands:
======================
f frame hf_list inspect paste put source
Let’s review the most useful ones. First, to print the context of where the Python interpreter is at, use l <line_number>, or l . for the current line:
(Pdb++) l .
1 from orm import Field, Model
2
3
4 def test_field_default_and_access():
5 -> breakpoint()
6 class TestModel(Model):
7 foo = Field(default="bar")
8
9 obj = TestModel()
10 assert obj.foo == "bar", "Field should return default value"
11 obj.foo = "baz"
To step over to the next line, use next or just n:
(Pdb++) n
[25] > /tmp/testp/test_orm.py(6)test_field_default_and_access()
-> class TestModel(Model):
5 frames hidden (try 'help hidden_frames')
(Pdb++) l .
1 from orm import Field, Model
2
3
4 def test_field_default_and_access():
5 breakpoint()
6 -> class TestModel(Model):
7 foo = Field(default="bar")
8
9 obj = TestModel()
10 assert obj.foo == "bar", "Field should return default value"
11 obj.foo = "baz"
As you can see, the arrow on the left shows that we moved from line 5 to 6.
To step into the next frame, use step or just s. For example, if I’m on the line obj = TestModel() and send s, this will take me into the __init__ method:
[25] > /tmp/testp/test_orm.py(9)test_field_default_and_access()
-> obj = TestModel()
5 frames hidden (try 'help hidden_frames')
(Pdb++) s
--Call--
[26] > /tmp/testp/orm.py(32)__init__()
-> def __init__(self, **kwargs):
5 frames hidden (try 'help hidden_frames')
Had I sent n instead, it would have moved on to the next line without taking me inside the __init__ method.
To move back up the stack, use up or just u:
[26] > /tmp/testp/orm.py(32)__init__()
-> def __init__(self, **kwargs):
5 frames hidden (try 'help hidden_frames')
(Pdb++) u
[25] > /tmp/testp/test_orm.py(9)test_field_default_and_access()
-> obj = TestModel()
In some cases, you can’t use pdb because you don’t have the script running in the foreground; in that case, you can use a remote debugger like madbg.
def test_field_default_and_access():
    import madbg; madbg.set_trace()
    class TestModel(Model):
        # ... rest of code ...
Then, running the test won’t drop you in an interactive shell but instead will print:
Waiting for debugger client on 127.0.0.1:3513
In another terminal, run madbg connect to connect to the debugger remotely:
$ madbg connect
> /tmp/testp/test_orm.py(5)test_field_default_and_access()
3
4 def test_field_default_and_access():
----> 5 import madbg; madbg.set_trace()
6 class TestModel(Model):
7 foo = Field(default="bar")
5 frames hidden (try 'help hidden_frames')
ipdb>
Remote debugging is useful for non-interactive processes, like server applications or background tasks.
This works great for Ansible modules of course, but also if you want to debug a server process, e.g. one running in systemd, to inspect some particular case so that you can reproduce it in a test ;)
Namespace imports improve readability by avoiding long import lists and make it clear where functions originate.
Instead of:
from some_lib import (
    Lets,
    Do,
    Some,
    Very,
    Long,
    Imports,
    Because,
    Why,
    Not,
    And,
    Then,
    Dont,
    Ever,
    Maintain,
    It,
)
I prefer:
import some_lib
some_lib.Lets()
some_lib.Maintain()
some_lib.It()
Making a Python package is super easy: first, find a package and copy and paste its setup.py xD
While the Python Packaging Authority recommends pyproject.toml, I prefer setup.py, which allows dynamic build logic, as we’ll see.
For example, your repository looks like this:
08/07 2025 15:20:47 jpic@jpic /tmp/your_pkg
$ find .
.
./README
./your_pkg
./your_pkg/foo.py
./setup.py
In your_pkg/foo.py:
class Bar:
    pass
And in setup.py:
import os
from setuptools import setup

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

setup(
    name='your_pkg',
    version='3.12.1',
    description='Your package provides tomatoes',
    author='Daniel Monbars',
    author_email='daniel@example.com',
    url='http://your.rtfd.org',
    long_description=read('README'),
    license='MIT',
    install_requires=[
        'django>=3.2',
    ],
    extras_require={
        'nested': ['django-nested-admin>=3.0.21'],
        'tags': ['django-taggit'],
    },
    classifiers=[
        'Development Status :: 5 - Production/Stable',
    ]
)
Install it in editable mode (with symlinks instead of file copies) with -e:
$ pip install -e .
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///tmp/your_pkg
Preparing metadata (setup.py) ... done
Requirement already satisfied: django>=3.2 in /home/jpic/.local/lib/python3.13/site-packages (from your_package==3.12.1) (5.1.7)
Requirement already satisfied: asgiref<4,>=3.8.1 in /usr/lib/python3.13/site-packages (from django>=3.2->your_package==3.12.1) (3.8.1)
Requirement already satisfied: sqlparse>=0.3.1 in /home/jpic/.local/lib/python3.13/site-packages (from django>=3.2->your_package==3.12.1) (0.5.3)
Installing collected packages: your_package
DEPRECATION: Legacy editable install of your_package==3.12.1 from file:///tmp/your_pkg (setup.py develop) is deprecated. pip 25.3 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457
Running setup.py develop for your_package
Successfully installed your_package-3.12.1
You can now import it:
>>> from your_pkg import foo
>>> foo.Bar()
<your_pkg.foo.Bar object at 0x79fdba04ae40>
Optional dependencies allow users to install only what they need, reducing bloat. The extras_require section allows declaring optional dependencies:
extras_require={
    'nested': ['django-nested-admin>=3.0.21'],
    'tags': ['django-taggit'],
},
If you want to install your package with the optional dependencies for tags, do:
pip install -e .[tags]
With both tags and nested:
pip install -e .[tags,nested]
We might also want to declare a full extra:
extras_require = {
    'nested': ['django-nested-admin>=3.0.21'],
    'tags': ['django-taggit'],
}
extras_require['full'] = [
    dep
    for extra, deps in extras_require.items()
    for dep in deps
]  # Combines all optional dependencies
Which will leave you with:
>>> extras_require
{'nested': ['django-nested-admin>=3.0.21'], 'tags': ['django-taggit'], 'full': ['django-nested-admin>=3.0.21', 'django-taggit']}
That’s already a good reason to use setup.py, but wait, there’s more!
setupmeta simplifies packaging by auto-detecting metadata like version from git tags or project files.
I fell in love with setupmeta back in 2019. Basically, it allows stuff like version to be auto-discovered, based on git tags for example, removing the need for version bump commits, as you can see here for example.
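As a sketch, a setup.py relying on setupmeta could shrink down to this (the versioning strategy name follows setupmeta’s docs; double-check before copying):
from setuptools import setup

setup(
    name='your_pkg',
    setup_requires='setupmeta',  # pulls setupmeta in at build time
    versioning='distance',       # version computed from the latest git tag + commit distance
)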
Python entry points are a mechanism defined in a package’s setup.py or pyproject.toml to register functions, classes, or scripts that other packages or tools can discover and use at runtime. They’re part of the setuptools ecosystem, specified in the entry_points section, and are commonly used for plugins, CLI tools, or extensible systems.
The console_scripts entry point allows defining a command, by adding this to my setup() call:
entry_points=dict(
    console_scripts=[
        'your_cmd = your_pkg.foo:your_cmd',
    ]
)
And of course defining such a function:
$ cat your_pkg/foo.py
import sys

def your_cmd():
    print('hello', sys.argv)
After re-running a pip install, of course, we can call the command:
$ your_cmd foo bar
hello ['/home/jpic/.local/bin/your_cmd', 'foo', 'bar']
Note that various libraries are out there to help you.
You can define your own entry point names if you’re designing a plugin system. Also, you can plug your code into other people’s code using their entry point; for example, this is how chttpx (my ORM over REST APIs) registers its pytest plugin:
entry_points={
    'pytest11': [
        'chttpx = chttpx.pytest',
    ],
},
Basically, libraries like cli2 use entrypoints to provide extensible CLI tools, while chttpx registers pytest plugins via entrypoints.
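On the consuming side, the standard library can discover and load everything registered under a group. A minimal sketch, assuming a hypothetical your_pkg.plugins group and Python 3.10+ for the group keyword:
from importlib.metadata import entry_points

# iterate over every entry point registered under our hypothetical group
for entry_point in entry_points(group='your_pkg.plugins'):
    plugin = entry_point.load()  # imports the module and returns the registered object
    print(entry_point.name, plugin)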
To build packages, use python -m build, which builds a source tarball and a wheel in dist/, and then twine to upload them to PyPI or something.
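A sketch of the full release dance, assuming build and twine installed from PyPI:
$ pip install build twine
$ python -m build
$ twine upload dist/*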
That’s it! Have a lot of fun!