Jan 16 21:10:09.188785 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 18:44:02 -00 2026
Jan 16 21:10:09.188837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:10:09.188861 kernel: BIOS-provided physical RAM map:
Jan 16 21:10:09.188874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 21:10:09.188885 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 21:10:09.188895 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 21:10:09.188908 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 16 21:10:09.188928 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 16 21:10:09.188940 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 21:10:09.188951 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 21:10:09.188963 kernel: NX (Execute Disable) protection: active
Jan 16 21:10:09.188977 kernel: APIC: Static calls initialized
Jan 16 21:10:09.188988 kernel: SMBIOS 2.8 present.
Jan 16 21:10:09.189000 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 21:10:09.189017 kernel: DMI: Memory slots populated: 1/1
Jan 16 21:10:09.189035 kernel: Hypervisor detected: KVM
Jan 16 21:10:09.189064 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 21:10:09.189082 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 21:10:09.189099 kernel: kvm-clock: using sched offset of 4735195877 cycles
Jan 16 21:10:09.189119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 21:10:09.189135 kernel: tsc: Detected 1995.312 MHz processor
Jan 16 21:10:09.189154 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 21:10:09.189173 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 21:10:09.189196 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 21:10:09.189215 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 21:10:09.189232 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 21:10:09.189250 kernel: ACPI: Early table checksum verification disabled
Jan 16 21:10:09.189268 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 16 21:10:09.189288 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189306 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189324 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189343 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 21:10:09.189355 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189368 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189380 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189393 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:10:09.189407 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Jan 16 21:10:09.189421 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Jan 16 21:10:09.189437 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 21:10:09.189451 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Jan 16 21:10:09.189471 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Jan 16 21:10:09.189485 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Jan 16 21:10:09.189498 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Jan 16 21:10:09.189516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 21:10:09.189531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 21:10:09.189545 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Jan 16 21:10:09.189557 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Jan 16 21:10:09.189571 kernel: Zone ranges:
Jan 16 21:10:09.190226 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 21:10:09.190253 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 16 21:10:09.190268 kernel: Normal empty
Jan 16 21:10:09.190283 kernel: Device empty
Jan 16 21:10:09.190298 kernel: Movable zone start for each node
Jan 16 21:10:09.190312 kernel: Early memory node ranges
Jan 16 21:10:09.190325 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 21:10:09.190339 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 16 21:10:09.190354 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 16 21:10:09.190371 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 21:10:09.190385 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 21:10:09.190399 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 16 21:10:09.190423 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 21:10:09.190437 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 21:10:09.190455 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 21:10:09.190469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 21:10:09.190487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 21:10:09.190501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 21:10:09.190521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 21:10:09.190536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 21:10:09.190550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 21:10:09.190565 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 21:10:09.190596 kernel: TSC deadline timer available
Jan 16 21:10:09.190615 kernel: CPU topo: Max. logical packages: 1
Jan 16 21:10:09.190628 kernel: CPU topo: Max. logical dies: 1
Jan 16 21:10:09.190641 kernel: CPU topo: Max. dies per package: 1
Jan 16 21:10:09.190654 kernel: CPU topo: Max. threads per core: 1
Jan 16 21:10:09.190667 kernel: CPU topo: Num. cores per package: 2
Jan 16 21:10:09.190679 kernel: CPU topo: Num. threads per package: 2
Jan 16 21:10:09.190692 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 16 21:10:09.190705 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 21:10:09.190722 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 21:10:09.190735 kernel: Booting paravirtualized kernel on KVM
Jan 16 21:10:09.190749 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 21:10:09.190763 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 21:10:09.190776 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 16 21:10:09.190790 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 16 21:10:09.190804 kernel: pcpu-alloc: [0] 0 1
Jan 16 21:10:09.190821 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 21:10:09.190837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:10:09.190851 kernel: random: crng init done
Jan 16 21:10:09.190864 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 21:10:09.190878 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 21:10:09.190891 kernel: Fallback order for Node 0: 0
Jan 16 21:10:09.190904 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Jan 16 21:10:09.190922 kernel: Policy zone: DMA32
Jan 16 21:10:09.190936 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 21:10:09.190950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 21:10:09.190963 kernel: Kernel/User page tables isolation: enabled
Jan 16 21:10:09.190977 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 16 21:10:09.190991 kernel: ftrace: allocated 157 pages with 5 groups
Jan 16 21:10:09.191005 kernel: Dynamic Preempt: voluntary
Jan 16 21:10:09.191022 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 21:10:09.191038 kernel: rcu: RCU event tracing is enabled.
Jan 16 21:10:09.191052 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 21:10:09.191066 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 21:10:09.191080 kernel: Rude variant of Tasks RCU enabled.
Jan 16 21:10:09.191094 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 21:10:09.191108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 21:10:09.191122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 21:10:09.191139 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 21:10:09.191160 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 21:10:09.191175 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 21:10:09.191189 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 21:10:09.191202 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 21:10:09.191216 kernel: Console: colour VGA+ 80x25
Jan 16 21:10:09.191229 kernel: printk: legacy console [tty0] enabled
Jan 16 21:10:09.191246 kernel: printk: legacy console [ttyS0] enabled
Jan 16 21:10:09.191260 kernel: ACPI: Core revision 20240827
Jan 16 21:10:09.191274 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 21:10:09.191299 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 21:10:09.191315 kernel: x2apic enabled
Jan 16 21:10:09.191329 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 21:10:09.191346 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 21:10:09.191366 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 16 21:10:09.191393 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Jan 16 21:10:09.191417 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 21:10:09.191436 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 21:10:09.191451 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 21:10:09.191466 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 21:10:09.191485 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 16 21:10:09.191500 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 21:10:09.191515 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 21:10:09.191531 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 21:10:09.191545 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 21:10:09.191560 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 21:10:09.191575 kernel: active return thunk: its_return_thunk
Jan 16 21:10:09.191610 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 16 21:10:09.191625 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 21:10:09.191640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 21:10:09.191655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 21:10:09.191670 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 21:10:09.191685 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 21:10:09.191701 kernel: Freeing SMP alternatives memory: 32K
Jan 16 21:10:09.191718 kernel: pid_max: default: 32768 minimum: 301
Jan 16 21:10:09.191733 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 16 21:10:09.191747 kernel: landlock: Up and running.
Jan 16 21:10:09.191762 kernel: SELinux: Initializing.
Jan 16 21:10:09.191780 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 21:10:09.191799 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 21:10:09.191815 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 21:10:09.191836 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 21:10:09.191852 kernel: signal: max sigframe size: 1776
Jan 16 21:10:09.191872 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 21:10:09.191891 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 21:10:09.191906 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 16 21:10:09.191921 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 21:10:09.191935 kernel: smp: Bringing up secondary CPUs ...
Jan 16 21:10:09.191959 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 21:10:09.191975 kernel: .... node #0, CPUs: #1
Jan 16 21:10:09.191990 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 21:10:09.192004 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Jan 16 21:10:09.192020 kernel: Memory: 1983292K/2096612K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108756K reserved, 0K cma-reserved)
Jan 16 21:10:09.192036 kernel: devtmpfs: initialized
Jan 16 21:10:09.192050 kernel: x86/mm: Memory block size: 128MB
Jan 16 21:10:09.192080 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 21:10:09.192095 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 21:10:09.192110 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 21:10:09.192125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 21:10:09.192139 kernel: audit: initializing netlink subsys (disabled)
Jan 16 21:10:09.192155 kernel: audit: type=2000 audit(1768597805.505:1): state=initialized audit_enabled=0 res=1
Jan 16 21:10:09.192171 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 21:10:09.192189 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 21:10:09.192203 kernel: cpuidle: using governor menu
Jan 16 21:10:09.192218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 21:10:09.192233 kernel: dca service started, version 1.12.1
Jan 16 21:10:09.192254 kernel: PCI: Using configuration type 1 for base access
Jan 16 21:10:09.192270 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 21:10:09.192287 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 21:10:09.192306 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 21:10:09.192326 kernel: ACPI: Added _OSI(Module Device)
Jan 16 21:10:09.192344 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 21:10:09.192360 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 21:10:09.192375 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 21:10:09.192390 kernel: ACPI: Interpreter enabled
Jan 16 21:10:09.192404 kernel: ACPI: PM: (supports S0 S5)
Jan 16 21:10:09.192419 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 21:10:09.192437 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 21:10:09.192452 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 21:10:09.192467 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 21:10:09.192482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 21:10:09.197970 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 21:10:09.198214 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 21:10:09.198445 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 21:10:09.198467 kernel: acpiphp: Slot [3] registered
Jan 16 21:10:09.198484 kernel: acpiphp: Slot [4] registered
Jan 16 21:10:09.198500 kernel: acpiphp: Slot [5] registered
Jan 16 21:10:09.198515 kernel: acpiphp: Slot [6] registered
Jan 16 21:10:09.198531 kernel: acpiphp: Slot [7] registered
Jan 16 21:10:09.198551 kernel: acpiphp: Slot [8] registered
Jan 16 21:10:09.198566 kernel: acpiphp: Slot [9] registered
Jan 16 21:10:09.198603 kernel: acpiphp: Slot [10] registered
Jan 16 21:10:09.198617 kernel: acpiphp: Slot [11] registered
Jan 16 21:10:09.198631 kernel: acpiphp: Slot [12] registered
Jan 16 21:10:09.198645 kernel: acpiphp: Slot [13] registered
Jan 16 21:10:09.198659 kernel: acpiphp: Slot [14] registered
Jan 16 21:10:09.198672 kernel: acpiphp: Slot [15] registered
Jan 16 21:10:09.198690 kernel: acpiphp: Slot [16] registered
Jan 16 21:10:09.198703 kernel: acpiphp: Slot [17] registered
Jan 16 21:10:09.198717 kernel: acpiphp: Slot [18] registered
Jan 16 21:10:09.198732 kernel: acpiphp: Slot [19] registered
Jan 16 21:10:09.198748 kernel: acpiphp: Slot [20] registered
Jan 16 21:10:09.198764 kernel: acpiphp: Slot [21] registered
Jan 16 21:10:09.198781 kernel: acpiphp: Slot [22] registered
Jan 16 21:10:09.198800 kernel: acpiphp: Slot [23] registered
Jan 16 21:10:09.198815 kernel: acpiphp: Slot [24] registered
Jan 16 21:10:09.198830 kernel: acpiphp: Slot [25] registered
Jan 16 21:10:09.198844 kernel: acpiphp: Slot [26] registered
Jan 16 21:10:09.198858 kernel: acpiphp: Slot [27] registered
Jan 16 21:10:09.198874 kernel: acpiphp: Slot [28] registered
Jan 16 21:10:09.198888 kernel: acpiphp: Slot [29] registered
Jan 16 21:10:09.198903 kernel: acpiphp: Slot [30] registered
Jan 16 21:10:09.198924 kernel: acpiphp: Slot [31] registered
Jan 16 21:10:09.198941 kernel: PCI host bridge to bus 0000:00
Jan 16 21:10:09.199177 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 21:10:09.199362 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 21:10:09.199543 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 21:10:09.199757 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 21:10:09.199944 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 21:10:09.200140 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 21:10:09.200375 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 16 21:10:09.200673 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 16 21:10:09.200926 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 16 21:10:09.201164 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Jan 16 21:10:09.201376 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jan 16 21:10:09.204336 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jan 16 21:10:09.204617 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jan 16 21:10:09.204819 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jan 16 21:10:09.205041 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 16 21:10:09.205256 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Jan 16 21:10:09.205475 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 16 21:10:09.205708 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 21:10:09.205918 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 21:10:09.206140 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 16 21:10:09.206357 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 16 21:10:09.206566 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 21:10:09.209168 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Jan 16 21:10:09.212782 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Jan 16 21:10:09.213040 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 21:10:09.213270 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 16 21:10:09.213471 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Jan 16 21:10:09.213697 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Jan 16 21:10:09.213901 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 21:10:09.214121 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 16 21:10:09.214334 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Jan 16 21:10:09.214550 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Jan 16 21:10:09.216848 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 21:10:09.217092 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 16 21:10:09.217306 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Jan 16 21:10:09.217513 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Jan 16 21:10:09.217739 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 21:10:09.217975 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 16 21:10:09.218176 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Jan 16 21:10:09.218374 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Jan 16 21:10:09.218572 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 21:10:09.220914 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 16 21:10:09.221129 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Jan 16 21:10:09.221345 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Jan 16 21:10:09.221545 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 21:10:09.221789 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 16 21:10:09.221993 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Jan 16 21:10:09.222201 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 21:10:09.222221 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 21:10:09.222238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 21:10:09.222255 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 21:10:09.222271 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 21:10:09.222288 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 21:10:09.222303 kernel: iommu: Default domain type: Translated
Jan 16 21:10:09.222322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 21:10:09.222338 kernel: PCI: Using ACPI for IRQ routing
Jan 16 21:10:09.222355 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 21:10:09.222371 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 21:10:09.222388 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 16 21:10:09.224650 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 21:10:09.224922 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 21:10:09.225134 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 21:10:09.225156 kernel: vgaarb: loaded
Jan 16 21:10:09.225175 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 21:10:09.225192 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 21:10:09.225211 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 21:10:09.225228 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 21:10:09.225246 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 21:10:09.225265 kernel: pnp: PnP ACPI init
Jan 16 21:10:09.225286 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 21:10:09.225304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 21:10:09.225321 kernel: NET: Registered PF_INET protocol family
Jan 16 21:10:09.225339 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 21:10:09.225357 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 21:10:09.225375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 21:10:09.225393 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 21:10:09.225413 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 21:10:09.225431 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 21:10:09.225449 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 21:10:09.225467 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 21:10:09.225484 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 21:10:09.225502 kernel: NET: Registered PF_XDP protocol family
Jan 16 21:10:09.225748 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 21:10:09.225941 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 21:10:09.226121 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 21:10:09.226301 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 21:10:09.226497 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 21:10:09.226726 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 21:10:09.226926 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 21:10:09.226957 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 21:10:09.227160 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 32580 usecs
Jan 16 21:10:09.227183 kernel: PCI: CLS 0 bytes, default 64
Jan 16 21:10:09.227200 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 21:10:09.227218 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 16 21:10:09.227233 kernel: Initialise system trusted keyrings
Jan 16 21:10:09.227251 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 21:10:09.227273 kernel: Key type asymmetric registered
Jan 16 21:10:09.227290 kernel: Asymmetric key parser 'x509' registered
Jan 16 21:10:09.227306 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 21:10:09.227323 kernel: io scheduler mq-deadline registered
Jan 16 21:10:09.227342 kernel: io scheduler kyber registered
Jan 16 21:10:09.227359 kernel: io scheduler bfq registered
Jan 16 21:10:09.227376 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 21:10:09.227395 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 21:10:09.227410 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 21:10:09.227427 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 21:10:09.227443 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 21:10:09.227461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 21:10:09.227478 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 21:10:09.227494 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 21:10:09.227514 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 21:10:09.227532 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 16 21:10:09.229850 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 21:10:09.230061 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 21:10:09.230254 kernel: rtc_cmos 00:03: setting system clock to 2026-01-16T21:10:07 UTC (1768597807)
Jan 16 21:10:09.230442 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 21:10:09.230470 kernel: intel_pstate: CPU model not supported
Jan 16 21:10:09.230487 kernel: NET: Registered PF_INET6 protocol family
Jan 16 21:10:09.230504 kernel: Segment Routing with IPv6
Jan 16 21:10:09.230522 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 21:10:09.230539 kernel: NET: Registered PF_PACKET protocol family
Jan 16 21:10:09.230555 kernel: Key type dns_resolver registered
Jan 16 21:10:09.230571 kernel: IPI shorthand broadcast: enabled
Jan 16 21:10:09.230614 kernel: sched_clock: Marking stable (2460005529, 265155102)->(2794699692, -69539061)
Jan 16 21:10:09.230631 kernel: registered taskstats version 1
Jan 16 21:10:09.230647 kernel: Loading compiled-in X.509 certificates
Jan 16 21:10:09.230662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a9591db9912320a48a0589d0293fff3e535b90df'
Jan 16 21:10:09.230678 kernel: Demotion targets for Node 0: null
Jan 16 21:10:09.230693 kernel: Key type .fscrypt registered
Jan 16 21:10:09.230709 kernel: Key type fscrypt-provisioning registered
Jan 16 21:10:09.230746 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 21:10:09.230765 kernel: ima: Allocated hash algorithm: sha1
Jan 16 21:10:09.230782 kernel: ima: No architecture policies found
Jan 16 21:10:09.230798 kernel: clk: Disabling unused clocks
Jan 16 21:10:09.230814 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 16 21:10:09.230831 kernel: Write protecting the kernel read-only data: 47104k
Jan 16 21:10:09.230848 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 16 21:10:09.230865 kernel: Run /init as init process
Jan 16 21:10:09.230885 kernel: with arguments:
Jan 16 21:10:09.230901 kernel: /init
Jan 16 21:10:09.230917 kernel: with environment:
Jan 16 21:10:09.230935 kernel: HOME=/
Jan 16 21:10:09.230951 kernel: TERM=linux
Jan 16 21:10:09.230967 kernel: SCSI subsystem initialized
Jan 16 21:10:09.230985 kernel: libata version 3.00 loaded.
Jan 16 21:10:09.231213 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 21:10:09.231446 kernel: scsi host0: ata_piix
Jan 16 21:10:09.233782 kernel: scsi host1: ata_piix
Jan 16 21:10:09.233831 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Jan 16 21:10:09.233853 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Jan 16 21:10:09.233882 kernel: ACPI: bus type USB registered
Jan 16 21:10:09.233903 kernel: usbcore: registered new interface driver usbfs
Jan 16 21:10:09.233924 kernel: usbcore: registered new interface driver hub
Jan 16 21:10:09.233945 kernel: usbcore: registered new device driver usb
Jan 16 21:10:09.234241 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 21:10:09.234505 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 21:10:09.234744 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 21:10:09.235020 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 21:10:09.235341 kernel: hub 1-0:1.0: USB hub found
Jan 16 21:10:09.237680 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 21:10:09.238013 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 21:10:09.238282 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 21:10:09.238309 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 21:10:09.238331 kernel: GPT:16515071 != 125829119
Jan 16 21:10:09.238352 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 21:10:09.238371 kernel: GPT:16515071 != 125829119
Jan 16 21:10:09.238392 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 21:10:09.238416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 21:10:09.238722 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 21:10:09.238985 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 16 21:10:09.239258 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Jan 16 21:10:09.239544 kernel: scsi host2: Virtio SCSI HBA
Jan 16 21:10:09.239574 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 21:10:09.241642 kernel: device-mapper: uevent: version 1.0.3
Jan 16 21:10:09.241667 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 16 21:10:09.241689 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 16 21:10:09.241710 kernel: raid6: avx2x4 gen() 25647 MB/s
Jan 16 21:10:09.241731 kernel: raid6: avx2x2 gen() 19848 MB/s
Jan 16 21:10:09.241751 kernel: raid6: avx2x1 gen() 18569 MB/s
Jan 16 21:10:09.241780 kernel: raid6: using algorithm avx2x4 gen() 25647 MB/s
Jan 16 21:10:09.241801 kernel: raid6: .... xor() 8923 MB/s, rmw enabled
Jan 16 21:10:09.241825 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 21:10:09.241846 kernel: xor: automatically using best checksumming function avx
Jan 16 21:10:09.241866 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 21:10:09.241887 kernel: BTRFS: device fsid a5f82c06-1ff1-43b3-a650-214802f1359b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (162)
Jan 16 21:10:09.241908 kernel: BTRFS info (device dm-0): first mount of filesystem a5f82c06-1ff1-43b3-a650-214802f1359b
Jan 16 21:10:09.241934 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:10:09.241954 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 21:10:09.241974 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 16 21:10:09.241995 kernel: loop: module loaded
Jan 16 21:10:09.242016 kernel: loop0: detected capacity change from 0 to 100536
Jan 16 21:10:09.242040 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 21:10:09.242064 systemd[1]: Successfully made /usr/ read-only.
Jan 16 21:10:09.242096 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 16 21:10:09.242118 systemd[1]: Detected virtualization kvm.
Jan 16 21:10:09.242139 systemd[1]: Detected architecture x86-64.
Jan 16 21:10:09.242161 systemd[1]: Running in initrd.
Jan 16 21:10:09.242180 systemd[1]: No hostname configured, using default hostname.
Jan 16 21:10:09.242203 systemd[1]: Hostname set to .
Jan 16 21:10:09.242229 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 16 21:10:09.242251 systemd[1]: Queued start job for default target initrd.target.
Jan 16 21:10:09.242272 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:10:09.242293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:10:09.242312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:10:09.242330 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 21:10:09.242355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 21:10:09.242378 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 21:10:09.242400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 21:10:09.242422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:10:09.242444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:10:09.242465 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:10:09.242490 systemd[1]: Reached target paths.target - Path Units.
Jan 16 21:10:09.242512 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 21:10:09.242533 systemd[1]: Reached target swap.target - Swaps.
Jan 16 21:10:09.242555 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 21:10:09.242577 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:10:09.242614 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:10:09.242629 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:10:09.242648 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 21:10:09.242664 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 16 21:10:09.242681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:10:09.242696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:10:09.242714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:10:09.242730 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 21:10:09.242746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 21:10:09.242764 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 21:10:09.242780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 21:10:09.242796 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 21:10:09.242814 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 16 21:10:09.242831 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 21:10:09.242849 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 21:10:09.242868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 21:10:09.242885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:10:09.242899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 21:10:09.242917 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:10:09.242996 systemd-journald[298]: Collecting audit messages is enabled.
Jan 16 21:10:09.243035 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 21:10:09.243053 kernel: audit: type=1130 audit(1768597809.189:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.243076 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 21:10:09.243096 systemd-journald[298]: Journal started
Jan 16 21:10:09.244629 systemd-journald[298]: Runtime Journal (/run/log/journal/9c46a9f520b84325a3b14c5f90659696) is 4.8M, max 39.1M, 34.2M free.
Jan 16 21:10:09.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.253282 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 21:10:09.267616 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 21:10:09.272357 systemd-modules-load[300]: Inserted module 'br_netfilter'
Jan 16 21:10:09.348134 kernel: Bridge firewalling registered
Jan 16 21:10:09.348208 kernel: audit: type=1130 audit(1768597809.340:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.342166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:10:09.356541 kernel: audit: type=1130 audit(1768597809.348:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.349542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:10:09.365187 kernel: audit: type=1130 audit(1768597809.356:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.359492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 21:10:09.375095 kernel: audit: type=1130 audit(1768597809.365:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.373625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 21:10:09.378875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:10:09.384849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:10:09.389797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 21:10:09.416649 kernel: audit: type=1130 audit(1768597809.409:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.409298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:10:09.420456 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:10:09.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.426743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:10:09.438251 kernel: audit: type=1130 audit(1768597809.420:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.438295 kernel: audit: type=1334 audit(1768597809.423:9): prog-id=6 op=LOAD
Jan 16 21:10:09.423000 audit: BPF prog-id=6 op=LOAD
Jan 16 21:10:09.428360 systemd-tmpfiles[317]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 16 21:10:09.438753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:10:09.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.451623 kernel: audit: type=1130 audit(1768597809.442:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.457536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 21:10:09.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.460822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 21:10:09.490232 dracut-cmdline[337]: dracut-109
Jan 16 21:10:09.498616 dracut-cmdline[337]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:10:09.527614 systemd-resolved[330]: Positive Trust Anchors:
Jan 16 21:10:09.528721 systemd-resolved[330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:10:09.528729 systemd-resolved[330]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:10:09.528766 systemd-resolved[330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:10:09.577280 systemd-resolved[330]: Defaulting to hostname 'linux'.
Jan 16 21:10:09.580205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:10:09.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.581137 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:10:09.639621 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 21:10:09.660617 kernel: iscsi: registered transport (tcp)
Jan 16 21:10:09.691911 kernel: iscsi: registered transport (qla4xxx)
Jan 16 21:10:09.692018 kernel: QLogic iSCSI HBA Driver
Jan 16 21:10:09.730966 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 21:10:09.766270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:10:09.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.769903 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 21:10:09.839304 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 21:10:09.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.843869 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 21:10:09.846793 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 21:10:09.895117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 21:10:09.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.897000 audit: BPF prog-id=7 op=LOAD
Jan 16 21:10:09.897000 audit: BPF prog-id=8 op=LOAD
Jan 16 21:10:09.899108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:10:09.936737 systemd-udevd[564]: Using default interface naming scheme 'v257'.
Jan 16 21:10:09.953103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:10:09.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:09.957078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 21:10:09.997715 dracut-pre-trigger[627]: rd.md=0: removing MD RAID activation
Jan 16 21:10:10.016413 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 21:10:10.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.018000 audit: BPF prog-id=9 op=LOAD
Jan 16 21:10:10.020218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 21:10:10.048988 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:10:10.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.053010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 21:10:10.090648 systemd-networkd[688]: lo: Link UP
Jan 16 21:10:10.090661 systemd-networkd[688]: lo: Gained carrier
Jan 16 21:10:10.093474 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 21:10:10.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.095755 systemd[1]: Reached target network.target - Network.
Jan 16 21:10:10.173619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:10:10.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.178794 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 21:10:10.323639 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 21:10:10.338488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 21:10:10.349982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 21:10:10.361984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 21:10:10.365837 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 21:10:10.390767 disk-uuid[735]: Primary Header is updated.
Jan 16 21:10:10.390767 disk-uuid[735]: Secondary Entries is updated.
Jan 16 21:10:10.390767 disk-uuid[735]: Secondary Header is updated.
Jan 16 21:10:10.407196 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 21:10:10.518042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:10:10.519802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:10:10.535760 kernel: AES CTR mode by8 optimization enabled
Jan 16 21:10:10.535793 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 16 21:10:10.535807 kernel: audit: type=1131 audit(1768597810.523:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.524252 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:10:10.541956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:10:10.565427 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 16 21:10:10.639662 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Jan 16 21:10:10.639777 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 21:10:10.643999 systemd-networkd[688]: eth0: Link UP
Jan 16 21:10:10.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.645726 systemd-networkd[688]: eth0: Gained carrier
Jan 16 21:10:10.761278 kernel: audit: type=1130 audit(1768597810.752:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.645745 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Jan 16 21:10:10.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.769727 kernel: audit: type=1130 audit(1768597810.761:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:10:10.651983 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:10:10.651991 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:10:10.655459 systemd-networkd[688]: eth1: Link UP
Jan 16 21:10:10.655877 systemd-networkd[688]: eth1: Gained carrier
Jan 16 21:10:10.655899 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:10:10.668743 systemd-networkd[688]: eth0: DHCPv4 address 137.184.190.135/20, gateway 137.184.176.1 acquired from 169.254.169.253
Jan 16 21:10:10.680846 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.46/20 acquired from 169.254.169.253
Jan 16 21:10:10.752105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:10:10.760878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:10:10.769862 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:10:10.771853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:10:10.773704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 21:10:10.776768 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 21:10:10.803246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 21:10:10.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:10.811707 kernel: audit: type=1130 audit(1768597810.804:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.486680 disk-uuid[736]: Warning: The kernel is still using the old partition table. Jan 16 21:10:11.486680 disk-uuid[736]: The new table will be used at the next reboot or after you Jan 16 21:10:11.486680 disk-uuid[736]: run partprobe(8) or kpartx(8) Jan 16 21:10:11.486680 disk-uuid[736]: The operation has completed successfully. Jan 16 21:10:11.493265 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 21:10:11.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.493431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 21:10:11.508175 kernel: audit: type=1130 audit(1768597811.493:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.508219 kernel: audit: type=1131 audit(1768597811.493:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.497814 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 21:10:11.546617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (823) Jan 16 21:10:11.550875 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb Jan 16 21:10:11.553633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 21:10:11.560752 kernel: BTRFS info (device vda6): turning on async discard Jan 16 21:10:11.560862 kernel: BTRFS info (device vda6): enabling free space tree Jan 16 21:10:11.572639 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb Jan 16 21:10:11.574203 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 21:10:11.581965 kernel: audit: type=1130 audit(1768597811.574:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.576873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 16 21:10:11.869699 ignition[842]: Ignition 2.24.0 Jan 16 21:10:11.869715 ignition[842]: Stage: fetch-offline Jan 16 21:10:11.869776 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:11.869790 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:11.869913 ignition[842]: parsed url from cmdline: "" Jan 16 21:10:11.875101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 21:10:11.889093 kernel: audit: type=1130 audit(1768597811.880:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.869917 ignition[842]: no config URL provided Jan 16 21:10:11.883774 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 21:10:11.869923 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 21:10:11.869933 ignition[842]: no config at "/usr/lib/ignition/user.ign" Jan 16 21:10:11.869939 ignition[842]: failed to fetch config: resource requires networking Jan 16 21:10:11.871054 ignition[842]: Ignition finished successfully Jan 16 21:10:11.911849 systemd-networkd[688]: eth1: Gained IPv6LL Jan 16 21:10:11.930359 ignition[852]: Ignition 2.24.0 Jan 16 21:10:11.930383 ignition[852]: Stage: fetch Jan 16 21:10:11.930782 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:11.930798 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:11.931016 ignition[852]: parsed url from cmdline: "" Jan 16 21:10:11.931022 ignition[852]: no config URL provided Jan 16 21:10:11.931047 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 21:10:11.931060 ignition[852]: no config at "/usr/lib/ignition/user.ign" Jan 16 21:10:11.931128 ignition[852]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 16 21:10:11.947713 ignition[852]: GET result: OK Jan 16 21:10:11.948166 ignition[852]: parsing config with SHA512: f5371c9b77edd2d3e87294498740871a371471aa7d44f1c747a957fe181107da38fb6e05f3da605bfb7dcf4f9b3927977b2714fbadae16a91c78d1d4fc455ecd Jan 16 21:10:11.959841 unknown[852]: fetched base config from "system" Jan 16 21:10:11.959855 unknown[852]: fetched base config from "system" Jan 16 21:10:11.960407 ignition[852]: fetch: fetch complete Jan 16 21:10:11.959863 unknown[852]: fetched user config from "digitalocean" Jan 16 21:10:11.960413 ignition[852]: fetch: fetch passed Jan 16 21:10:11.960474 ignition[852]: Ignition finished successfully Jan 16 21:10:11.964038 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 21:10:11.973551 kernel: audit: type=1130 audit(1768597811.964:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:11.967980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 16 21:10:12.002418 ignition[858]: Ignition 2.24.0 Jan 16 21:10:12.002437 ignition[858]: Stage: kargs Jan 16 21:10:12.002696 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:12.002709 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:12.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.017606 kernel: audit: type=1130 audit(1768597812.008:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.008427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 21:10:12.004137 ignition[858]: kargs: kargs passed Jan 16 21:10:12.014106 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 16 21:10:12.004220 ignition[858]: Ignition finished successfully Jan 16 21:10:12.066844 ignition[865]: Ignition 2.24.0 Jan 16 21:10:12.066862 ignition[865]: Stage: disks Jan 16 21:10:12.067534 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:12.067553 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:12.073505 ignition[865]: disks: disks passed Jan 16 21:10:12.074615 ignition[865]: Ignition finished successfully Jan 16 21:10:12.077447 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 21:10:12.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.079336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 21:10:12.081343 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 21:10:12.082496 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 21:10:12.084296 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 21:10:12.085897 systemd[1]: Reached target basic.target - Basic System. Jan 16 21:10:12.089572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 21:10:12.103898 systemd-networkd[688]: eth0: Gained IPv6LL Jan 16 21:10:12.139200 systemd-fsck[873]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 16 21:10:12.143742 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 21:10:12.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.151772 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 21:10:12.312642 kernel: EXT4-fs (vda9): mounted filesystem ec5ae8d3-548b-4a34-bd68-b1a953fcffb6 r/w with ordered data mode. Quota mode: none. Jan 16 21:10:12.313357 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 21:10:12.316346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 21:10:12.321395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 21:10:12.325117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 21:10:12.331913 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... 
Jan 16 21:10:12.341778 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 21:10:12.343952 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 21:10:12.344033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 21:10:12.370026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881) Jan 16 21:10:12.370131 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb Jan 16 21:10:12.375187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 21:10:12.379717 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 21:10:12.386284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 21:10:12.417103 kernel: BTRFS info (device vda6): turning on async discard Jan 16 21:10:12.417196 kernel: BTRFS info (device vda6): enabling free space tree Jan 16 21:10:12.426834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 21:10:12.477425 coreos-metadata[883]: Jan 16 21:10:12.477 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 21:10:12.496046 coreos-metadata[883]: Jan 16 21:10:12.494 INFO Fetch successful Jan 16 21:10:12.498081 coreos-metadata[884]: Jan 16 21:10:12.497 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 21:10:12.508832 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 16 21:10:12.510190 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 16 21:10:12.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.514226 coreos-metadata[884]: Jan 16 21:10:12.513 INFO Fetch successful Jan 16 21:10:12.521575 coreos-metadata[884]: Jan 16 21:10:12.521 INFO wrote hostname ci-4580.0.0-p-735bf5553b to /sysroot/etc/hostname Jan 16 21:10:12.523143 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 21:10:12.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.662174 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 21:10:12.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.666351 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 21:10:12.670883 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 21:10:12.697073 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 21:10:12.700009 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb Jan 16 21:10:12.723668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 16 21:10:12.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.740031 ignition[989]: INFO : Ignition 2.24.0 Jan 16 21:10:12.740031 ignition[989]: INFO : Stage: mount Jan 16 21:10:12.742552 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:12.742552 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:12.745054 ignition[989]: INFO : mount: mount passed Jan 16 21:10:12.745054 ignition[989]: INFO : Ignition finished successfully Jan 16 21:10:12.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:12.745291 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 21:10:12.749489 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 21:10:12.778242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 21:10:12.806902 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1002) Jan 16 21:10:12.806965 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb Jan 16 21:10:12.809633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 21:10:12.815966 kernel: BTRFS info (device vda6): turning on async discard Jan 16 21:10:12.816050 kernel: BTRFS info (device vda6): enabling free space tree Jan 16 21:10:12.820004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 21:10:12.877675 ignition[1019]: INFO : Ignition 2.24.0 Jan 16 21:10:12.877675 ignition[1019]: INFO : Stage: files Jan 16 21:10:12.877675 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:12.877675 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:12.882082 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping Jan 16 21:10:12.882082 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 21:10:12.882082 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 21:10:12.886188 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 21:10:12.886188 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 21:10:12.890052 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 21:10:12.890052 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 16 21:10:12.890052 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 16 21:10:12.886386 unknown[1019]: wrote ssh authorized keys file for user: core Jan 16 21:10:12.931205 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 21:10:12.992422 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 16 21:10:12.992422 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 21:10:12.996547 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 16 21:10:13.005922 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 16 21:10:23.017170 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET error: Get "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw": net/http: timeout awaiting response headers Jan 16 21:10:23.217605 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #2 Jan 16 21:10:23.640431 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 21:10:24.049350 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 16 21:10:24.051880 ignition[1019]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 21:10:24.054237 ignition[1019]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 21:10:24.057050 ignition[1019]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 21:10:24.059353 ignition[1019]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 21:10:24.059353 ignition[1019]: INFO : 
files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 16 21:10:24.059353 ignition[1019]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 21:10:24.059353 ignition[1019]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 21:10:24.059353 ignition[1019]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 21:10:24.059353 ignition[1019]: INFO : files: files passed Jan 16 21:10:24.078370 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 16 21:10:24.078414 kernel: audit: type=1130 audit(1768597824.064:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.078508 ignition[1019]: INFO : Ignition finished successfully Jan 16 21:10:24.061770 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 21:10:24.075418 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 21:10:24.080836 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 21:10:24.096829 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 21:10:24.096995 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 21:10:24.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.107620 kernel: audit: type=1130 audit(1768597824.099:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.107714 kernel: audit: type=1131 audit(1768597824.099:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.118132 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 21:10:24.119803 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 21:10:24.119803 initrd-setup-root-after-ignition[1051]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 21:10:24.121479 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 21:10:24.130242 kernel: audit: type=1130 audit(1768597824.122:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:24.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.124117 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 21:10:24.132511 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 21:10:24.197161 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 21:10:24.197361 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 21:10:24.211643 kernel: audit: type=1130 audit(1768597824.198:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.211682 kernel: audit: type=1131 audit(1768597824.198:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.199409 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 21:10:24.212558 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 21:10:24.214387 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 21:10:24.215640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 21:10:24.247066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 21:10:24.254656 kernel: audit: type=1130 audit(1768597824.247:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.250791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 21:10:24.279027 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 21:10:24.279387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 21:10:24.281318 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 21:10:24.283064 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 21:10:24.284603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 21:10:24.299310 kernel: audit: type=1131 audit(1768597824.285:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:24.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.284827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 21:10:24.292306 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 21:10:24.300163 systemd[1]: Stopped target basic.target - Basic System. Jan 16 21:10:24.301713 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 21:10:24.303055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 21:10:24.304696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 21:10:24.307118 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 16 21:10:24.308796 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 21:10:24.310218 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 21:10:24.311830 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 21:10:24.313375 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 21:10:24.314993 systemd[1]: Stopped target swap.target - Swaps. Jan 16 21:10:24.316354 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 21:10:24.323814 kernel: audit: type=1131 audit(1768597824.317:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.316620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 21:10:24.324013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 21:10:24.325810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 21:10:24.327381 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 21:10:24.327514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 21:10:24.337318 kernel: audit: type=1131 audit(1768597824.330:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.329099 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 21:10:24.329288 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 21:10:24.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.337336 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 21:10:24.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 16 21:10:24.337658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 21:10:24.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.339577 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 21:10:24.339822 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 21:10:24.341316 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 21:10:24.341475 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 21:10:24.345822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 21:10:24.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.347329 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 21:10:24.347553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 21:10:24.352962 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 21:10:24.354833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 21:10:24.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.354998 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 21:10:24.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.356779 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 21:10:24.357013 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 21:10:24.358448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 21:10:24.360733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 21:10:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.371667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 21:10:24.374727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 21:10:24.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:24.382761 ignition[1075]: INFO : Ignition 2.24.0 Jan 16 21:10:24.382761 ignition[1075]: INFO : Stage: umount Jan 16 21:10:24.382761 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 21:10:24.382761 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 21:10:24.389059 ignition[1075]: INFO : umount: umount passed Jan 16 21:10:24.389059 ignition[1075]: INFO : Ignition finished successfully Jan 16 21:10:24.389834 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 21:10:24.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.390678 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 21:10:24.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.394478 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 21:10:24.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.394551 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 21:10:24.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.395495 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 21:10:24.395622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 21:10:24.397796 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 21:10:24.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.397860 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 21:10:24.400130 systemd[1]: Stopped target network.target - Network. Jan 16 21:10:24.401641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 21:10:24.401731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 21:10:24.404860 systemd[1]: Stopped target paths.target - Path Units. Jan 16 21:10:24.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.405543 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 21:10:24.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.406571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 21:10:24.407766 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 21:10:24.409338 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 21:10:24.412455 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 16 21:10:24.412532 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 21:10:24.413405 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 21:10:24.413464 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 21:10:24.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.414314 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 16 21:10:24.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.414351 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 16 21:10:24.415784 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 21:10:24.415874 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 21:10:24.434000 audit: BPF prog-id=6 op=UNLOAD Jan 16 21:10:24.417065 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 21:10:24.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.417134 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 21:10:24.418708 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 21:10:24.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.420821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 21:10:24.424943 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 21:10:24.426984 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 21:10:24.427097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 21:10:24.442000 audit: BPF prog-id=9 op=UNLOAD Jan 16 21:10:24.429447 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 21:10:24.429574 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 21:10:24.434425 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 21:10:24.434855 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 21:10:24.437993 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 21:10:24.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.438230 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 21:10:24.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.443357 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 16 21:10:24.445071 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 21:10:24.445145 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 21:10:24.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.447647 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 21:10:24.450008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 21:10:24.450101 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 21:10:24.453322 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 21:10:24.453437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 21:10:24.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.455161 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 21:10:24.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.455261 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 21:10:24.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.456344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 21:10:24.469212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 21:10:24.469383 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 21:10:24.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.474952 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 21:10:24.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.475035 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 21:10:24.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.478353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 21:10:24.478399 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 21:10:24.479880 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 21:10:24.479960 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 21:10:24.481901 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 21:10:24.481986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 21:10:24.483392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 16 21:10:24.483473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 21:10:24.486166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 21:10:24.488092 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 16 21:10:24.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.488193 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 21:10:24.489752 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 21:10:24.489830 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 21:10:24.493402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 21:10:24.493506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:10:24.510376 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 21:10:24.510598 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 21:10:24.521338 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 21:10:24.521517 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 21:10:24.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:24.524239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 21:10:24.526866 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 21:10:24.550626 systemd[1]: Switching root. Jan 16 21:10:24.601352 systemd-journald[298]: Journal stopped Jan 16 21:10:26.201367 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Jan 16 21:10:26.201477 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 21:10:26.201502 kernel: SELinux: policy capability open_perms=1 Jan 16 21:10:26.201531 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 21:10:26.201554 kernel: SELinux: policy capability always_check_network=0 Jan 16 21:10:26.201615 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 21:10:26.201634 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 21:10:26.201660 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 21:10:26.201680 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 21:10:26.201699 kernel: SELinux: policy capability userspace_initial_context=0 Jan 16 21:10:26.201720 systemd[1]: Successfully loaded SELinux policy in 193.697ms. Jan 16 21:10:26.201758 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.504ms. 
Jan 16 21:10:26.201773 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 21:10:26.201787 systemd[1]: Detected virtualization kvm. Jan 16 21:10:26.201808 systemd[1]: Detected architecture x86-64. Jan 16 21:10:26.201822 systemd[1]: Detected first boot. Jan 16 21:10:26.201834 systemd[1]: Hostname set to <ci-4580.0.0-p-735bf5553b>. Jan 16 21:10:26.201846 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 21:10:26.201859 zram_generator::config[1118]: No configuration found. Jan 16 21:10:26.201878 kernel: Guest personality initialized and is inactive Jan 16 21:10:26.201892 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 16 21:10:26.201904 kernel: Initialized host personality Jan 16 21:10:26.201916 kernel: NET: Registered PF_VSOCK protocol family Jan 16 21:10:26.201930 systemd[1]: Populated /etc with preset unit settings. Jan 16 21:10:26.201943 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 21:10:26.201957 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 21:10:26.201970 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 21:10:26.201993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 21:10:26.202006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 21:10:26.202020 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 21:10:26.202032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 21:10:26.202046 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 21:10:26.202059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 21:10:26.202072 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 21:10:26.202087 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 21:10:26.202101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 21:10:26.202114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 21:10:26.202128 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 21:10:26.202141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 21:10:26.202159 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 21:10:26.202187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 21:10:26.202215 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 21:10:26.202238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 21:10:26.202253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 21:10:26.202266 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 21:10:26.202285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 21:10:26.202314 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 21:10:26.202337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 21:10:26.202354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 21:10:26.202367 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 21:10:26.202380 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 16 21:10:26.202401 systemd[1]: Reached target slices.target - Slice Units. Jan 16 21:10:26.202423 systemd[1]: Reached target swap.target - Swaps. Jan 16 21:10:26.202447 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 21:10:26.202469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 21:10:26.202492 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 16 21:10:26.202510 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 21:10:26.202523 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 16 21:10:26.202547 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 21:10:26.202568 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 16 21:10:26.204659 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 16 21:10:26.204685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 21:10:26.204698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 21:10:26.204711 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 21:10:26.204726 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 21:10:26.204739 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 21:10:26.204752 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 21:10:26.204772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:26.204785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 21:10:26.204798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 21:10:26.204810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 21:10:26.204829 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 21:10:26.204843 systemd[1]: Reached target machines.target - Containers. Jan 16 21:10:26.204856 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 21:10:26.204874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:10:26.204888 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 21:10:26.204901 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 21:10:26.204914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 21:10:26.204927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 16 21:10:26.204939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 21:10:26.204952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 21:10:26.204968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 21:10:26.204982 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 21:10:26.204996 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 21:10:26.205009 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 21:10:26.205027 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 21:10:26.205043 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 21:10:26.205057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:10:26.205072 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 21:10:26.205085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 21:10:26.205101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 21:10:26.205114 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 21:10:26.205127 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 16 21:10:26.205140 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 21:10:26.205153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:26.205168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 21:10:26.205182 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 21:10:26.205197 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 21:10:26.205212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 21:10:26.205225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 21:10:26.205240 kernel: fuse: init (API version 7.41) Jan 16 21:10:26.205254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 21:10:26.205274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 21:10:26.205287 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 21:10:26.205300 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 21:10:26.205314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 21:10:26.205327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 21:10:26.205342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 21:10:26.205355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 21:10:26.205368 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 21:10:26.205381 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 21:10:26.205394 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 21:10:26.205407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 16 21:10:26.205420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 21:10:26.205435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 21:10:26.205448 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 21:10:26.205461 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 16 21:10:26.205474 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 21:10:26.205490 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 16 21:10:26.205552 systemd-journald[1189]: Collecting audit messages is enabled. Jan 16 21:10:26.205618 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 21:10:26.205635 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 21:10:26.205650 systemd-journald[1189]: Journal started Jan 16 21:10:26.205676 systemd-journald[1189]: Runtime Journal (/run/log/journal/9c46a9f520b84325a3b14c5f90659696) is 4.8M, max 39.1M, 34.2M free. Jan 16 21:10:25.825000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 16 21:10:25.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.008000 audit: BPF prog-id=14 op=UNLOAD Jan 16 21:10:26.008000 audit: BPF prog-id=13 op=UNLOAD Jan 16 21:10:26.011000 audit: BPF prog-id=15 op=LOAD Jan 16 21:10:26.011000 audit: BPF prog-id=16 op=LOAD Jan 16 21:10:26.011000 audit: BPF prog-id=17 op=LOAD Jan 16 21:10:26.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:26.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.196000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 21:10:26.196000 audit[1189]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe633e47f0 a2=4000 a3=0 items=0 ppid=1 pid=1189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:26.196000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 16 21:10:25.698320 systemd[1]: Queued start job for default target multi-user.target. Jan 16 21:10:25.723828 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 21:10:25.724558 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 21:10:26.211687 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 16 21:10:26.217641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 16 21:10:26.223626 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:10:26.229676 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 21:10:26.234669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 21:10:26.239622 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 21:10:26.245616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 21:10:26.249613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 21:10:26.263271 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 21:10:26.267619 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 21:10:26.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.274622 kernel: ACPI: bus type drm_connector registered Jan 16 21:10:26.295810 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 21:10:26.298065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 21:10:26.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.307223 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 21:10:26.339624 kernel: loop1: detected capacity change from 0 to 224512 Jan 16 21:10:26.338666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 21:10:26.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.341498 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 21:10:26.345223 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 16 21:10:26.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.361858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 21:10:26.369821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 21:10:26.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.373644 systemd-journald[1189]: Time spent on flushing to /var/log/journal/9c46a9f520b84325a3b14c5f90659696 is 29.416ms for 1148 entries. 
Jan 16 21:10:26.373644 systemd-journald[1189]: System Journal (/var/log/journal/9c46a9f520b84325a3b14c5f90659696) is 8M, max 163.5M, 155.5M free. Jan 16 21:10:26.414055 systemd-journald[1189]: Received client request to flush runtime journal. Jan 16 21:10:26.414123 kernel: loop2: detected capacity change from 0 to 8 Jan 16 21:10:26.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.386801 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 21:10:26.393246 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 21:10:26.416965 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 21:10:26.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.433712 kernel: loop3: detected capacity change from 0 to 111560 Jan 16 21:10:26.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.432270 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 16 21:10:26.464015 kernel: loop4: detected capacity change from 0 to 50784 Jan 16 21:10:26.465457 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 21:10:26.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.470000 audit: BPF prog-id=18 op=LOAD Jan 16 21:10:26.471000 audit: BPF prog-id=19 op=LOAD Jan 16 21:10:26.471000 audit: BPF prog-id=20 op=LOAD Jan 16 21:10:26.477959 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 16 21:10:26.480000 audit: BPF prog-id=21 op=LOAD Jan 16 21:10:26.483974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 21:10:26.489835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 21:10:26.497000 audit: BPF prog-id=22 op=LOAD Jan 16 21:10:26.497000 audit: BPF prog-id=23 op=LOAD Jan 16 21:10:26.497000 audit: BPF prog-id=24 op=LOAD Jan 16 21:10:26.500620 kernel: loop5: detected capacity change from 0 to 224512 Jan 16 21:10:26.507000 audit: BPF prog-id=25 op=LOAD Jan 16 21:10:26.507000 audit: BPF prog-id=26 op=LOAD Jan 16 21:10:26.507000 audit: BPF prog-id=27 op=LOAD Jan 16 21:10:26.503872 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 16 21:10:26.508954 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 21:10:26.524612 kernel: loop6: detected capacity change from 0 to 8 Jan 16 21:10:26.539619 kernel: loop7: detected capacity change from 0 to 111560 Jan 16 21:10:26.549093 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 16 21:10:26.549118 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 16 21:10:26.558768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 16 21:10:26.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.570623 kernel: loop1: detected capacity change from 0 to 50784 Jan 16 21:10:26.577736 systemd-nsresourced[1266]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 16 21:10:26.580372 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 16 21:10:26.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:26.592952 (sd-merge)[1265]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Jan 16 21:10:26.604646 (sd-merge)[1265]: Merged extensions into '/usr'. Jan 16 21:10:26.614559 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 21:10:26.614577 systemd[1]: Reloading... Jan 16 21:10:26.763618 zram_generator::config[1311]: No configuration found. Jan 16 21:10:26.852145 systemd-resolved[1263]: Positive Trust Anchors: Jan 16 21:10:26.854389 systemd-resolved[1263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 21:10:26.854829 systemd-resolved[1263]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 21:10:26.854944 systemd-resolved[1263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 21:10:26.865433 systemd-oomd[1262]: No swap; memory pressure usage will be degraded Jan 16 21:10:26.884341 systemd-resolved[1263]: Using system hostname 'ci-4580.0.0-p-735bf5553b'. Jan 16 21:10:27.108712 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 21:10:27.109129 systemd[1]: Reloading finished in 494 ms. Jan 16 21:10:27.126284 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 21:10:27.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.128351 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 16 21:10:27.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.130026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 21:10:27.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:27.131774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 21:10:27.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.138480 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 21:10:27.142109 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 21:10:27.151802 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 21:10:27.163851 systemd[1]: Starting ensure-sysext.service... Jan 16 21:10:27.172806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 21:10:27.184000 audit: BPF prog-id=28 op=LOAD Jan 16 21:10:27.184000 audit: BPF prog-id=25 op=UNLOAD Jan 16 21:10:27.184000 audit: BPF prog-id=29 op=LOAD Jan 16 21:10:27.184000 audit: BPF prog-id=30 op=LOAD Jan 16 21:10:27.184000 audit: BPF prog-id=26 op=UNLOAD Jan 16 21:10:27.184000 audit: BPF prog-id=27 op=UNLOAD Jan 16 21:10:27.185000 audit: BPF prog-id=31 op=LOAD Jan 16 21:10:27.185000 audit: BPF prog-id=22 op=UNLOAD Jan 16 21:10:27.185000 audit: BPF prog-id=32 op=LOAD Jan 16 21:10:27.185000 audit: BPF prog-id=33 op=LOAD Jan 16 21:10:27.185000 audit: BPF prog-id=23 op=UNLOAD Jan 16 21:10:27.185000 audit: BPF prog-id=24 op=UNLOAD Jan 16 21:10:27.188000 audit: BPF prog-id=34 op=LOAD Jan 16 21:10:27.188000 audit: BPF prog-id=21 op=UNLOAD Jan 16 21:10:27.189000 audit: BPF prog-id=35 op=LOAD Jan 16 21:10:27.189000 audit: BPF prog-id=18 op=UNLOAD Jan 16 21:10:27.189000 audit: BPF prog-id=36 op=LOAD Jan 16 21:10:27.189000 audit: BPF prog-id=37 op=LOAD Jan 16 21:10:27.189000 audit: BPF prog-id=19 op=UNLOAD Jan 16 21:10:27.189000 audit: BPF prog-id=20 op=UNLOAD Jan 16 21:10:27.192000 audit: BPF prog-id=38 op=LOAD Jan 16 21:10:27.193000 audit: BPF prog-id=15 op=UNLOAD Jan 16 21:10:27.193000 audit: BPF prog-id=39 op=LOAD Jan 16 21:10:27.194000 audit: BPF prog-id=40 op=LOAD Jan 16 21:10:27.194000 audit: BPF prog-id=16 op=UNLOAD Jan 16 21:10:27.194000 audit: BPF prog-id=17 op=UNLOAD Jan 16 21:10:27.197939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 21:10:27.198976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 21:10:27.205911 systemd[1]: Reload requested from client PID 1355 ('systemctl') (unit ensure-sysext.service)... Jan 16 21:10:27.206032 systemd[1]: Reloading... Jan 16 21:10:27.254297 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 16 21:10:27.255711 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 16 21:10:27.257023 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 21:10:27.258840 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 16 21:10:27.260276 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 16 21:10:27.272890 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 21:10:27.273076 systemd-tmpfiles[1356]: Skipping /boot Jan 16 21:10:27.297653 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 16 21:10:27.298629 systemd-tmpfiles[1356]: Skipping /boot Jan 16 21:10:27.333621 zram_generator::config[1390]: No configuration found. Jan 16 21:10:27.546960 systemd[1]: Reloading finished in 338 ms. Jan 16 21:10:27.559089 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 21:10:27.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.561000 audit: BPF prog-id=41 op=LOAD Jan 16 21:10:27.561000 audit: BPF prog-id=34 op=UNLOAD Jan 16 21:10:27.562000 audit: BPF prog-id=42 op=LOAD Jan 16 21:10:27.562000 audit: BPF prog-id=35 op=UNLOAD Jan 16 21:10:27.562000 audit: BPF prog-id=43 op=LOAD Jan 16 21:10:27.562000 audit: BPF prog-id=44 op=LOAD Jan 16 21:10:27.562000 audit: BPF prog-id=36 op=UNLOAD Jan 16 21:10:27.562000 audit: BPF prog-id=37 op=UNLOAD Jan 16 21:10:27.563000 audit: BPF prog-id=45 op=LOAD Jan 16 21:10:27.563000 audit: BPF prog-id=28 op=UNLOAD Jan 16 21:10:27.563000 audit: BPF prog-id=46 op=LOAD Jan 16 21:10:27.563000 audit: BPF prog-id=47 op=LOAD Jan 16 21:10:27.563000 audit: BPF prog-id=29 op=UNLOAD Jan 16 21:10:27.563000 audit: BPF prog-id=30 op=UNLOAD Jan 16 21:10:27.564000 audit: BPF prog-id=48 op=LOAD Jan 16 21:10:27.564000 audit: BPF prog-id=38 op=UNLOAD Jan 16 21:10:27.564000 audit: BPF prog-id=49 op=LOAD Jan 16 21:10:27.564000 audit: BPF prog-id=50 op=LOAD Jan 16 21:10:27.564000 audit: BPF prog-id=39 op=UNLOAD Jan 16 21:10:27.564000 audit: BPF prog-id=40 op=UNLOAD Jan 16 21:10:27.565000 audit: BPF prog-id=51 op=LOAD Jan 16 21:10:27.565000 audit: BPF prog-id=31 op=UNLOAD Jan 16 21:10:27.565000 audit: BPF prog-id=52 op=LOAD Jan 16 21:10:27.566000 audit: BPF prog-id=53 op=LOAD Jan 16 21:10:27.566000 audit: BPF prog-id=32 op=UNLOAD Jan 16 21:10:27.566000 audit: BPF prog-id=33 op=UNLOAD Jan 16 21:10:27.571705 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 21:10:27.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.584255 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:10:27.586872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 21:10:27.590892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 21:10:27.596940 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 21:10:27.598000 audit: BPF prog-id=8 op=UNLOAD Jan 16 21:10:27.598000 audit: BPF prog-id=7 op=UNLOAD Jan 16 21:10:27.598000 audit: BPF prog-id=54 op=LOAD Jan 16 21:10:27.598000 audit: BPF prog-id=55 op=LOAD Jan 16 21:10:27.603855 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 21:10:27.609975 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 21:10:27.616443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:27.616650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 16 21:10:27.626183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 21:10:27.639053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 21:10:27.644051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 21:10:27.645537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 21:10:27.645809 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:10:27.645910 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:10:27.646007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:27.663082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:27.663269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:10:27.663452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 21:10:27.665225 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:10:27.665389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:10:27.665484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:27.672025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:27.672285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:10:27.675000 audit[1440]: SYSTEM_BOOT pid=1440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.689549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 21:10:27.690849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 21:10:27.691065 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:10:27.691162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:10:27.691306 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 16 21:10:27.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.702900 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 21:10:27.703157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 21:10:27.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.713665 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 21:10:27.714931 systemd[1]: Finished ensure-sysext.service. Jan 16 21:10:27.721381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 21:10:27.726081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 21:10:27.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.730056 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 21:10:27.730912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 21:10:27.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.735889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 21:10:27.736270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 21:10:27.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.746572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 16 21:10:27.749241 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 21:10:27.755000 audit: BPF prog-id=56 op=LOAD Jan 16 21:10:27.757866 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 21:10:27.777692 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 21:10:27.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:27.792916 systemd-udevd[1439]: Using default interface naming scheme 'v257'. Jan 16 21:10:27.815000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 16 21:10:27.815000 audit[1474]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff8879e70 a2=420 a3=0 items=0 ppid=1435 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:27.815000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:10:27.816091 augenrules[1474]: No rules Jan 16 21:10:27.819127 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 21:10:27.819616 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 21:10:27.836983 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 21:10:27.838093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 21:10:27.860466 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 21:10:27.861919 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 21:10:27.870429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 21:10:27.875909 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 21:10:28.061417 systemd-networkd[1484]: lo: Link UP Jan 16 21:10:28.061431 systemd-networkd[1484]: lo: Gained carrier Jan 16 21:10:28.065833 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 21:10:28.067650 systemd[1]: Reached target network.target - Network. Jan 16 21:10:28.083786 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 16 21:10:28.088357 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 21:10:28.097946 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jan 16 21:10:28.105675 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 21:10:28.109602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:28.109824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:10:28.118098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 16 21:10:28.133052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 21:10:28.140455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 21:10:28.142424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 21:10:28.143667 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:10:28.143761 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:10:28.143813 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 21:10:28.143845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:10:28.144467 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 21:10:28.182671 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 21:10:28.194624 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 21:10:28.248255 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 16 21:10:28.286132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 21:10:28.288159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 21:10:28.298106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 21:10:28.302415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 21:10:28.303841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 21:10:28.306811 systemd-networkd[1484]: eth0: Configuring with /run/systemd/network/10-de:d1:2a:06:68:e2.network. Jan 16 21:10:28.308145 systemd-networkd[1484]: eth0: Link UP Jan 16 21:10:28.308494 systemd-networkd[1484]: eth0: Gained carrier Jan 16 21:10:28.314720 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:28.318159 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 21:10:28.319654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 21:10:28.322721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 21:10:28.375257 systemd-networkd[1484]: eth1: Configuring with /run/systemd/network/10-86:78:30:89:b6:50.network. Jan 16 21:10:28.377097 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:28.377326 systemd-networkd[1484]: eth1: Link UP Jan 16 21:10:28.377556 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:28.378005 systemd-networkd[1484]: eth1: Gained carrier Jan 16 21:10:28.386165 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:28.386565 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. 
Jan 16 21:10:28.392461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 21:10:28.402614 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 21:10:28.405802 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 21:10:28.471800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 16 21:10:28.464573 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 21:10:28.488630 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 21:10:28.494815 kernel: ACPI: button: Power Button [PWRF] Jan 16 21:10:28.517643 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 16 21:10:28.568631 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 21:10:28.581636 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 21:10:28.640695 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 21:10:28.650766 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 21:10:28.661515 kernel: Console: switching to colour dummy device 80x25 Jan 16 21:10:28.664979 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 21:10:28.682966 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 21:10:28.683085 kernel: [drm] features: -context_init Jan 16 21:10:28.697714 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 21:10:28.698177 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 21:10:28.698405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 21:10:28.698712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 21:10:28.699034 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 16 21:10:28.699305 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 21:10:28.700236 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 21:10:28.700481 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 16 21:10:28.700828 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 16 21:10:28.701823 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 21:10:28.701887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 21:10:28.701915 systemd[1]: Reached target paths.target - Path Units. Jan 16 21:10:28.701968 systemd[1]: Reached target timers.target - Timer Units. Jan 16 21:10:28.703788 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 21:10:28.705416 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 21:10:28.713433 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 16 21:10:28.714928 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 16 21:10:28.715013 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jan 16 21:10:28.721079 kernel: [drm] number of scanouts: 1 Jan 16 21:10:28.721144 kernel: [drm] number of cap sets: 0 Jan 16 21:10:28.727204 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jan 16 21:10:28.723841 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 21:10:28.724727 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 16 21:10:28.725660 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 21:10:28.726718 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 21:10:28.727546 systemd[1]: Reached target basic.target - Basic System. Jan 16 21:10:28.727706 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 21:10:28.727730 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 21:10:28.730930 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 21:10:28.734557 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 21:10:28.737369 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 21:10:28.737400 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 21:10:28.745828 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 21:10:28.752995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 21:10:28.756827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 21:10:28.764175 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 21:10:28.771993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 21:10:28.772207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 21:10:28.779164 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 16 21:10:28.785013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 21:10:28.792780 jq[1553]: false Jan 16 21:10:28.795021 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 21:10:28.801952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 21:10:28.810897 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 21:10:28.829033 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 21:10:28.830547 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 21:10:28.831516 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 21:10:28.835866 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 21:10:28.846841 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 21:10:28.859111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 21:10:28.862096 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 21:10:28.862815 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 16 21:10:28.863279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 21:10:28.864684 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 21:10:28.911099 jq[1563]: true Jan 16 21:10:28.946858 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jan 16 21:10:28.944997 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jan 16 21:10:28.977677 jq[1580]: true Jan 16 21:10:28.978007 coreos-metadata[1550]: Jan 16 21:10:28.975 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 21:10:28.979080 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Jan 16 21:10:28.979080 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 16 21:10:28.979080 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 16 21:10:28.978472 oslogin_cache_refresh[1555]: Failure getting users, quitting Jan 16 21:10:28.978494 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 16 21:10:28.978667 oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 16 21:10:28.980340 dbus-daemon[1551]: [system] SELinux support is enabled Jan 16 21:10:28.980752 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 21:10:28.989637 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 16 21:10:28.989637 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 16 21:10:28.983834 oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 16 21:10:28.987352 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 16 21:10:28.983850 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 16 21:10:28.987777 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 16 21:10:28.994510 extend-filesystems[1554]: Found /dev/vda6 Jan 16 21:10:28.995934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 21:10:28.998227 coreos-metadata[1550]: Jan 16 21:10:28.996 INFO Fetch successful Jan 16 21:10:28.995990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 21:10:28.997810 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 21:10:28.997932 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 16 21:10:28.998224 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 21:10:29.004320 update_engine[1562]: I20260116 21:10:29.003608 1562 main.cc:92] Flatcar Update Engine starting Jan 16 21:10:29.019365 systemd[1]: Started update-engine.service - Update Engine. 
Jan 16 21:10:29.023325 update_engine[1562]: I20260116 21:10:29.020249 1562 update_check_scheduler.cc:74] Next update check in 8m0s Jan 16 21:10:29.026767 extend-filesystems[1554]: Found /dev/vda9 Jan 16 21:10:29.044498 extend-filesystems[1554]: Checking size of /dev/vda9 Jan 16 21:10:29.079726 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 21:10:29.082037 extend-filesystems[1554]: Resized partition /dev/vda9 Jan 16 21:10:29.086903 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 21:10:29.091296 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Jan 16 21:10:29.087284 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 21:10:29.098379 tar[1582]: linux-amd64/LICENSE Jan 16 21:10:29.098379 tar[1582]: linux-amd64/helm Jan 16 21:10:29.106118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:10:29.118699 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Jan 16 21:10:29.262527 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 21:10:29.267492 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 21:10:29.294726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:10:29.298524 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Jan 16 21:10:29.339559 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 21:10:29.374086 systemd[1]: Starting sshkeys.service... Jan 16 21:10:29.418347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 21:10:29.419125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:10:29.422262 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:10:29.440422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:10:29.516352 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 16 21:10:29.570528 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 21:10:29.570528 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 16 21:10:29.570528 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 16 21:10:29.581971 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Jan 16 21:10:29.570863 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 21:10:29.578330 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 21:10:29.583117 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 21:10:29.583506 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 21:10:29.643752 systemd-logind[1560]: New seat seat0. Jan 16 21:10:29.653023 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Jan 16 21:10:29.654041 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 21:10:29.654465 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 21:10:29.704676 systemd-networkd[1484]: eth1: Gained IPv6LL Jan 16 21:10:29.706255 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. 
Jan 16 21:10:29.746838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:10:29.764622 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 21:10:29.768451 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 21:10:29.773871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:10:29.780966 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 21:10:29.808476 coreos-metadata[1648]: Jan 16 21:10:29.808 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 21:10:29.818616 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 21:10:29.836962 coreos-metadata[1648]: Jan 16 21:10:29.829 INFO Fetch successful Jan 16 21:10:29.838502 unknown[1648]: wrote ssh authorized keys file for user: core Jan 16 21:10:29.903745 systemd-networkd[1484]: eth0: Gained IPv6LL Jan 16 21:10:29.904273 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:29.944781 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Jan 16 21:10:29.945707 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 21:10:29.953740 systemd[1]: Finished sshkeys.service. Jan 16 21:10:29.991082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 21:10:30.069151 containerd[1584]: time="2026-01-16T21:10:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 16 21:10:30.071741 containerd[1584]: time="2026-01-16T21:10:30.071700082Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 16 21:10:30.131476 containerd[1584]: time="2026-01-16T21:10:30.131330208Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=1.742153ms Jan 16 21:10:30.133740 containerd[1584]: time="2026-01-16T21:10:30.133672461Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 16 21:10:30.134605 containerd[1584]: time="2026-01-16T21:10:30.134514643Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 16 21:10:30.135214 containerd[1584]: time="2026-01-16T21:10:30.135172575Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.137880479Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.137934476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138020954Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138039512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138336659Z" level=info msg="skip loading plugin" 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138361142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138384454Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 21:10:30.138619 containerd[1584]: time="2026-01-16T21:10:30.138396943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.140568267Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.140686744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.140867762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.141141371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.141175025Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.141186088Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 16 21:10:30.141622 containerd[1584]: time="2026-01-16T21:10:30.141243005Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 16 21:10:30.144931 containerd[1584]: time="2026-01-16T21:10:30.144890644Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 16 21:10:30.145269 containerd[1584]: time="2026-01-16T21:10:30.145212309Z" level=info msg="metadata content store policy set" policy=shared Jan 16 21:10:30.156536 containerd[1584]: time="2026-01-16T21:10:30.156482181Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156726074Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156851672Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156868395Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156881310Z" level=info msg="loading plugin" 
id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156892744Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156921904Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156932630Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156944196Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156957901Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156969889Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.156994596Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.157005781Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 16 21:10:30.157327 containerd[1584]: time="2026-01-16T21:10:30.157018383Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 16 21:10:30.158965 containerd[1584]: time="2026-01-16T21:10:30.158883633Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 21:10:30.158965 containerd[1584]: time="2026-01-16T21:10:30.158962470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 21:10:30.159072 containerd[1584]: time="2026-01-16T21:10:30.158991473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 21:10:30.159072 containerd[1584]: time="2026-01-16T21:10:30.159011871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 21:10:30.159072 containerd[1584]: time="2026-01-16T21:10:30.159032804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 21:10:30.159072 containerd[1584]: time="2026-01-16T21:10:30.159049329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 21:10:30.159072 containerd[1584]: time="2026-01-16T21:10:30.159068707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 21:10:30.159201 containerd[1584]: time="2026-01-16T21:10:30.159084311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 16 21:10:30.159201 containerd[1584]: time="2026-01-16T21:10:30.159099474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 21:10:30.159201 containerd[1584]: time="2026-01-16T21:10:30.159113513Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 21:10:30.159201 containerd[1584]: time="2026-01-16T21:10:30.159127782Z" level=info 
msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 21:10:30.159201 containerd[1584]: time="2026-01-16T21:10:30.159167716Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 21:10:30.159305 containerd[1584]: time="2026-01-16T21:10:30.159234959Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 21:10:30.159305 containerd[1584]: time="2026-01-16T21:10:30.159254908Z" level=info msg="Start snapshots syncer" Jan 16 21:10:30.159305 containerd[1584]: time="2026-01-16T21:10:30.159281649Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 21:10:30.160065 containerd[1584]: time="2026-01-16T21:10:30.159936330Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160071054Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160188145Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160390122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160423324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160443701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 
16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160460510Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160479283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160495241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160510985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160526145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 21:10:30.160812 containerd[1584]: time="2026-01-16T21:10:30.160544070Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164188068Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164271308Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164290312Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164306913Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164323081Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164363690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164383795Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164408583Z" level=info msg="runtime interface created" Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164417393Z" level=info msg="created NRI interface" Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164431979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164456948Z" level=info msg="Connect containerd service" Jan 16 21:10:30.164536 containerd[1584]: time="2026-01-16T21:10:30.164500730Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 21:10:30.171618 containerd[1584]: time="2026-01-16T21:10:30.168794760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 21:10:30.209618 kernel: EDAC MC: Ver: 3.0.0 Jan 16 21:10:30.523394 containerd[1584]: 
time="2026-01-16T21:10:30.523271424Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 21:10:30.523394 containerd[1584]: time="2026-01-16T21:10:30.523355274Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 21:10:30.523394 containerd[1584]: time="2026-01-16T21:10:30.523388893Z" level=info msg="Start subscribing containerd event" Jan 16 21:10:30.523554 containerd[1584]: time="2026-01-16T21:10:30.523426403Z" level=info msg="Start recovering state" Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523602114Z" level=info msg="Start event monitor" Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523655482Z" level=info msg="Start cni network conf syncer for default" Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523673951Z" level=info msg="Start streaming server" Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523687444Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523697196Z" level=info msg="runtime interface starting up..." Jan 16 21:10:30.523716 containerd[1584]: time="2026-01-16T21:10:30.523706495Z" level=info msg="starting plugins..." Jan 16 21:10:30.524939 containerd[1584]: time="2026-01-16T21:10:30.523727356Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 21:10:30.524164 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 21:10:30.529401 containerd[1584]: time="2026-01-16T21:10:30.527938510Z" level=info msg="containerd successfully booted in 0.459313s" Jan 16 21:10:30.653370 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 21:10:30.695849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 21:10:30.704910 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 21:10:30.754331 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 21:10:30.754684 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 21:10:30.760108 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 21:10:30.805205 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 21:10:30.813372 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 21:10:30.821813 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 21:10:30.824353 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 21:10:30.890831 tar[1582]: linux-amd64/README.md Jan 16 21:10:30.915984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 21:10:31.544671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:10:31.550446 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 21:10:31.552100 systemd[1]: Startup finished in 3.553s (kernel) + 16.195s (initrd) + 6.744s (userspace) = 26.493s. Jan 16 21:10:31.566041 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:10:32.270071 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 21:10:32.272444 systemd[1]: Started sshd@0-137.184.190.135:22-68.220.241.50:50080.service - OpenSSH per-connection server daemon (68.220.241.50:50080). 
Jan 16 21:10:32.301793 kubelet[1712]: E0116 21:10:32.301712 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:10:32.305740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:10:32.305914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:10:32.306847 systemd[1]: kubelet.service: Consumed 1.550s CPU time, 266.5M memory peak. Jan 16 21:10:32.747547 sshd[1723]: Accepted publickey for core from 68.220.241.50 port 50080 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:32.750545 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:32.765192 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 21:10:32.766934 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 21:10:32.771887 systemd-logind[1560]: New session 1 of user core. Jan 16 21:10:32.801012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 21:10:32.805987 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 21:10:32.828116 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:32.833030 systemd-logind[1560]: New session 2 of user core. Jan 16 21:10:33.011516 systemd[1730]: Queued start job for default target default.target. Jan 16 21:10:33.019340 systemd[1730]: Created slice app.slice - User Application Slice. Jan 16 21:10:33.019396 systemd[1730]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 21:10:33.019420 systemd[1730]: Reached target paths.target - Paths. Jan 16 21:10:33.019708 systemd[1730]: Reached target timers.target - Timers. Jan 16 21:10:33.022119 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 21:10:33.025798 systemd[1730]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 21:10:33.044830 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 21:10:33.044951 systemd[1730]: Reached target sockets.target - Sockets. Jan 16 21:10:33.050141 systemd[1730]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 21:10:33.050342 systemd[1730]: Reached target basic.target - Basic System. Jan 16 21:10:33.050429 systemd[1730]: Reached target default.target - Main User Target. Jan 16 21:10:33.050496 systemd[1730]: Startup finished in 206ms. Jan 16 21:10:33.050773 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 21:10:33.064004 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 21:10:33.289355 systemd[1]: Started sshd@1-137.184.190.135:22-68.220.241.50:50082.service - OpenSSH per-connection server daemon (68.220.241.50:50082). Jan 16 21:10:33.695497 sshd[1744]: Accepted publickey for core from 68.220.241.50 port 50082 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:33.697649 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:33.704504 systemd-logind[1560]: New session 3 of user core. 
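The kubelet failure above is the expected state on an unprovisioned node: /var/lib/kubelet/config.yaml is normally only written once kubeadm init/join (or another provisioner) runs. A stdlib-only sketch of the same preflight check, useful for telling "not provisioned yet" apart from a genuine read error; the path is the one from the log, nothing else is assumed:

    package main

    import (
        "fmt"
        "os"
    )

    // Path reported in the kubelet failure above.
    const kubeletConfig = "/var/lib/kubelet/config.yaml"

    func main() {
        data, err := os.ReadFile(kubeletConfig)
        switch {
        case err == nil:
            fmt.Printf("kubelet config present (%d bytes)\n", len(data))
        case os.IsNotExist(err):
            // Same condition kubelet hits above: the file has not been
            // written yet, so the unit exits just as logged.
            fmt.Fprintln(os.Stderr, "kubelet config not written yet:", err)
            os.Exit(1)
        default:
            fmt.Fprintln(os.Stderr, "unexpected error reading kubelet config:", err)
            os.Exit(1)
        }
    }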
Jan 16 21:10:33.715028 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 21:10:33.898790 sshd[1748]: Connection closed by 68.220.241.50 port 50082 Jan 16 21:10:33.899614 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jan 16 21:10:33.905559 systemd[1]: sshd@1-137.184.190.135:22-68.220.241.50:50082.service: Deactivated successfully. Jan 16 21:10:33.907930 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 21:10:33.909008 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Jan 16 21:10:33.911025 systemd-logind[1560]: Removed session 3. Jan 16 21:10:33.981135 systemd[1]: Started sshd@2-137.184.190.135:22-68.220.241.50:50096.service - OpenSSH per-connection server daemon (68.220.241.50:50096). Jan 16 21:10:34.383567 sshd[1754]: Accepted publickey for core from 68.220.241.50 port 50096 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:34.385245 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:34.391057 systemd-logind[1560]: New session 4 of user core. Jan 16 21:10:34.399313 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 21:10:34.605438 sshd[1758]: Connection closed by 68.220.241.50 port 50096 Jan 16 21:10:34.606164 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 16 21:10:34.614216 systemd[1]: sshd@2-137.184.190.135:22-68.220.241.50:50096.service: Deactivated successfully. Jan 16 21:10:34.618472 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 21:10:34.621291 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Jan 16 21:10:34.623131 systemd-logind[1560]: Removed session 4. Jan 16 21:10:34.690896 systemd[1]: Started sshd@3-137.184.190.135:22-68.220.241.50:50104.service - OpenSSH per-connection server daemon (68.220.241.50:50104). Jan 16 21:10:35.103470 sshd[1764]: Accepted publickey for core from 68.220.241.50 port 50104 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:35.105524 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:35.113500 systemd-logind[1560]: New session 5 of user core. Jan 16 21:10:35.121999 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 21:10:35.319104 sshd[1768]: Connection closed by 68.220.241.50 port 50104 Jan 16 21:10:35.319853 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Jan 16 21:10:35.325461 systemd[1]: sshd@3-137.184.190.135:22-68.220.241.50:50104.service: Deactivated successfully. Jan 16 21:10:35.328043 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 21:10:35.329724 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Jan 16 21:10:35.333011 systemd-logind[1560]: Removed session 5. Jan 16 21:10:35.389247 systemd[1]: Started sshd@4-137.184.190.135:22-68.220.241.50:50116.service - OpenSSH per-connection server daemon (68.220.241.50:50116). Jan 16 21:10:35.761010 sshd[1774]: Accepted publickey for core from 68.220.241.50 port 50116 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:35.762682 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:35.768988 systemd-logind[1560]: New session 6 of user core. Jan 16 21:10:35.783959 systemd[1]: Started session-6.scope - Session 6 of User core. 
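Sessions 3 through 6 above all follow the same pattern: an "Accepted publickey" line, a PAM session open, then a close. A rough sketch of pulling the user, peer address/port and key fingerprint out of those acceptance lines with the standard library; the regexp is illustrative, not a complete sshd log grammar:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches sshd acceptance lines like the ones above.
    var acceptedRe = regexp.MustCompile(
        `Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

    func main() {
        line := `Accepted publickey for core from 68.220.241.50 port 50104 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI`
        m := acceptedRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("user=%s addr=%s port=%s keytype=%s fingerprint=%s\n",
            m[1], m[2], m[3], m[4], m[5])
    }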
Jan 16 21:10:35.907312 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 21:10:35.907822 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:10:35.918105 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 16 21:10:35.979009 sshd[1778]: Connection closed by 68.220.241.50 port 50116 Jan 16 21:10:35.979889 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jan 16 21:10:35.984865 systemd[1]: sshd@4-137.184.190.135:22-68.220.241.50:50116.service: Deactivated successfully. Jan 16 21:10:35.987262 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 21:10:35.988849 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Jan 16 21:10:35.991692 systemd-logind[1560]: Removed session 6. Jan 16 21:10:36.060915 systemd[1]: Started sshd@5-137.184.190.135:22-68.220.241.50:50130.service - OpenSSH per-connection server daemon (68.220.241.50:50130). Jan 16 21:10:36.443495 sshd[1786]: Accepted publickey for core from 68.220.241.50 port 50130 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:36.445532 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:36.451989 systemd-logind[1560]: New session 7 of user core. Jan 16 21:10:36.459918 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 21:10:36.584113 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 21:10:36.584464 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:10:36.589538 sudo[1792]: pam_unix(sudo:session): session closed for user root Jan 16 21:10:36.599961 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 21:10:36.600423 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:10:36.612190 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:10:36.677660 kernel: kauditd_printk_skb: 177 callbacks suppressed Jan 16 21:10:36.677761 kernel: audit: type=1305 audit(1768597836.672:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:10:36.677794 kernel: audit: type=1300 audit(1768597836.672:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8bb9fd60 a2=420 a3=0 items=0 ppid=1797 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:36.672000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:10:36.672000 audit[1816]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8bb9fd60 a2=420 a3=0 items=0 ppid=1797 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:36.677520 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 21:10:36.678108 augenrules[1816]: No rules Jan 16 21:10:36.678798 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
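The audit records here (and throughout the Docker startup further down) carry the executed command line as a hex-encoded PROCTITLE field with NUL-separated argv elements. A small stdlib decoder, shown against the auditctl PROCTITLE value that appears in the records that follow:

    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex blob back into a readable
    // command line; argv elements are NUL-separated in the raw bytes.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
        // Value copied from the audit-rules PROCTITLE record below.
        const p = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
        cmd, err := decodeProctitle(p)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(cmd) // /sbin/auditctl -R /etc/audit/audit.rules
    }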
Jan 16 21:10:36.672000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:10:36.685731 kernel: audit: type=1327 audit(1768597836.672:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:10:36.685780 kernel: audit: type=1130 audit(1768597836.678:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.683715 sudo[1791]: pam_unix(sudo:session): session closed for user root Jan 16 21:10:36.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.683000 audit[1791]: USER_END pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.697872 kernel: audit: type=1131 audit(1768597836.678:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.698014 kernel: audit: type=1106 audit(1768597836.683:228): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.683000 audit[1791]: CRED_DISP pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.701483 kernel: audit: type=1104 audit(1768597836.683:229): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:36.751192 sshd[1790]: Connection closed by 68.220.241.50 port 50130 Jan 16 21:10:36.751649 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 16 21:10:36.752000 audit[1786]: USER_END pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:36.752000 audit[1786]: CRED_DISP pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:36.763198 kernel: audit: type=1106 audit(1768597836.752:230): pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:36.763302 kernel: audit: type=1104 audit(1768597836.752:231): pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:36.763872 systemd[1]: sshd@5-137.184.190.135:22-68.220.241.50:50130.service: Deactivated successfully. Jan 16 21:10:36.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-137.184.190.135:22-68.220.241.50:50130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.767031 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 21:10:36.769227 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Jan 16 21:10:36.769604 kernel: audit: type=1131 audit(1768597836.763:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-137.184.190.135:22-68.220.241.50:50130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.771949 systemd-logind[1560]: Removed session 7. Jan 16 21:10:36.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.190.135:22-68.220.241.50:50134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:36.828768 systemd[1]: Started sshd@6-137.184.190.135:22-68.220.241.50:50134.service - OpenSSH per-connection server daemon (68.220.241.50:50134). 
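For the flat key=value portion of these audit records (pid, uid, auid, ses, res and so on) a naive splitter is often enough. This is a rough sketch only: it deliberately ignores the quoted msg='...' blob, whose embedded spaces would need quoting-aware parsing.

    package main

    import (
        "fmt"
        "strings"
    )

    // parseAuditFields splits an audit record into key=value pairs. Values
    // containing spaces (e.g. the msg='...' blob) are not handled and simply
    // come out truncated.
    func parseAuditFields(rec string) map[string]string {
        fields := make(map[string]string)
        for _, tok := range strings.Fields(rec) {
            if k, v, ok := strings.Cut(tok, "="); ok {
                fields[k] = strings.Trim(v, `'"`)
            }
        }
        return fields
    }

    func main() {
        rec := `audit[1786]: USER_END pid=1786 uid=0 auid=500 ses=7 res=success`
        f := parseAuditFields(rec)
        fmt.Println(f["auid"], f["ses"], f["res"]) // 500 7 success
    }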
Jan 16 21:10:37.208000 audit[1825]: USER_ACCT pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:37.209851 sshd[1825]: Accepted publickey for core from 68.220.241.50 port 50134 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:10:37.210000 audit[1825]: CRED_ACQ pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:37.210000 audit[1825]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc63cf4690 a2=3 a3=0 items=0 ppid=1 pid=1825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:37.210000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:10:37.211854 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:10:37.220316 systemd-logind[1560]: New session 8 of user core. Jan 16 21:10:37.230972 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 21:10:37.234000 audit[1825]: USER_START pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:37.237000 audit[1829]: CRED_ACQ pid=1829 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:10:37.348000 audit[1830]: USER_ACCT pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:37.349504 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 21:10:37.349000 audit[1830]: CRED_REFR pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:37.349949 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:10:37.349000 audit[1830]: USER_START pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:10:37.993983 systemd[1]: Starting docker.service - Docker Application Container Engine... 
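docker.service is starting above; once the daemon in the following entries finishes initialization it listens on the default /var/run/docker.sock. A minimal sketch of checking it from Go, assuming the upstream Docker Go SDK (github.com/docker/docker/client); the version-struct field names come from that SDK, not from this log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // FromEnv falls back to the default unix socket /var/run/docker.sock
        // when DOCKER_HOST is unset; API version negotiation avoids pinning
        // the client to one daemon release.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatalf("create docker client: %v", err)
        }
        defer cli.Close()

        v, err := cli.ServerVersion(context.Background())
        if err != nil {
            log.Fatalf("query daemon: %v", err)
        }
        fmt.Printf("docker %s (commit %s)\n", v.Version, v.GitCommit)
    }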
Jan 16 21:10:38.007531 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 21:10:38.602402 dockerd[1850]: time="2026-01-16T21:10:38.602315302Z" level=info msg="Starting up" Jan 16 21:10:38.605044 dockerd[1850]: time="2026-01-16T21:10:38.604978004Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 21:10:38.628356 dockerd[1850]: time="2026-01-16T21:10:38.628239815Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 21:10:38.657672 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3613480013-merged.mount: Deactivated successfully. Jan 16 21:10:38.702874 dockerd[1850]: time="2026-01-16T21:10:38.702815122Z" level=info msg="Loading containers: start." Jan 16 21:10:38.717704 kernel: Initializing XFRM netlink socket Jan 16 21:10:38.799000 audit[1899]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.799000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcddd56d10 a2=0 a3=0 items=0 ppid=1850 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:10:38.803000 audit[1901]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.803000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff0e5be2a0 a2=0 a3=0 items=0 ppid=1850 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.803000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:10:38.807000 audit[1903]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.807000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0b4e1060 a2=0 a3=0 items=0 ppid=1850 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.807000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:10:38.810000 audit[1905]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.810000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2a333d80 a2=0 a3=0 items=0 ppid=1850 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.810000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:10:38.814000 audit[1907]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.814000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd34b5850 a2=0 a3=0 items=0 ppid=1850 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:10:38.817000 audit[1909]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.817000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc9996f50 a2=0 a3=0 items=0 ppid=1850 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:10:38.821000 audit[1911]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.821000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd39920100 a2=0 a3=0 items=0 ppid=1850 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:10:38.826000 audit[1913]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.826000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff37f78f00 a2=0 a3=0 items=0 ppid=1850 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:10:38.867000 audit[1916]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.867000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc8f3de880 a2=0 a3=0 items=0 ppid=1850 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.867000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 21:10:38.870000 audit[1918]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.870000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe70a0e240 a2=0 a3=0 items=0 ppid=1850 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.870000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:10:38.875000 audit[1920]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.875000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd609af2c0 a2=0 a3=0 items=0 ppid=1850 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:10:38.878000 audit[1922]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.878000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffee120b9d0 a2=0 a3=0 items=0 ppid=1850 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:10:38.882000 audit[1924]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.882000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffa42e85e0 a2=0 a3=0 items=0 ppid=1850 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.882000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:10:38.939000 audit[1954]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.939000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffedf1e4700 a2=0 a3=0 items=0 ppid=1850 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.939000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:10:38.942000 audit[1956]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.942000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0bfbaf70 a2=0 a3=0 items=0 ppid=1850 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:10:38.946000 audit[1958]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.946000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd0d36690 a2=0 a3=0 items=0 ppid=1850 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:10:38.949000 audit[1960]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.949000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8c1025e0 a2=0 a3=0 items=0 ppid=1850 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:10:38.953000 audit[1962]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.953000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2eaa7a90 a2=0 a3=0 items=0 ppid=1850 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:10:38.956000 audit[1964]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.956000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff5d8b36a0 a2=0 a3=0 items=0 ppid=1850 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:10:38.960000 audit[1966]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1966 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.960000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe0a406760 a2=0 a3=0 items=0 ppid=1850 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:10:38.963000 audit[1968]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.963000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdd5270810 a2=0 a3=0 items=0 ppid=1850 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:10:38.966000 audit[1970]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.966000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd4797ec00 a2=0 a3=0 items=0 ppid=1850 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 21:10:38.969000 audit[1972]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.969000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9efe4cf0 a2=0 a3=0 items=0 ppid=1850 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:10:38.973000 audit[1974]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.973000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe72307e40 a2=0 a3=0 items=0 ppid=1850 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:10:38.976000 audit[1976]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 16 21:10:38.976000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffff4afcee0 a2=0 a3=0 items=0 ppid=1850 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:10:38.979000 audit[1978]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.979000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeed24a040 a2=0 a3=0 items=0 ppid=1850 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:10:38.987000 audit[1983]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.987000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff2672a5d0 a2=0 a3=0 items=0 ppid=1850 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:10:38.990000 audit[1985]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.990000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffee9479320 a2=0 a3=0 items=0 ppid=1850 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:10:38.994000 audit[1987]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:38.994000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff26946110 a2=0 a3=0 items=0 ppid=1850 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:10:38.997000 audit[1989]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:38.997000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff38d185c0 a2=0 a3=0 items=0 ppid=1850 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:38.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:10:39.000000 audit[1991]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:39.000000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffdce0e310 a2=0 a3=0 items=0 ppid=1850 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:10:39.003000 audit[1993]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:10:39.003000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcc5a01b20 a2=0 a3=0 items=0 ppid=1850 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:10:39.014333 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 16 21:10:39.038000 audit[1997]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.038000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffdccd4a2d0 a2=0 a3=0 items=0 ppid=1850 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 21:10:39.046000 audit[2001]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.046000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe2c1c1050 a2=0 a3=0 items=0 ppid=1850 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 21:10:39.060000 audit[2009]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.060000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd16732050 a2=0 a3=0 items=0 ppid=1850 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 21:10:39.073000 audit[2015]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.073000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffea57295d0 a2=0 a3=0 items=0 ppid=1850 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 21:10:39.077000 audit[2017]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.077000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc59193bd0 a2=0 a3=0 items=0 ppid=1850 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 21:10:39.710691 systemd-resolved[1263]: Clock change detected. Flushing caches. Jan 16 21:10:39.711070 systemd-timesyncd[1464]: Contacted time server 216.250.115.174:123 (2.flatcar.pool.ntp.org). Jan 16 21:10:39.711145 systemd-timesyncd[1464]: Initial clock synchronization to Fri 2026-01-16 21:10:39.710429 UTC. 
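[editor's note] The audit records above log every xtables invocation with its command line hex-encoded in the PROCTITLE field (NUL-separated argv). A minimal Python sketch for decoding one of those fields back into the command that ran; the hex string is copied verbatim from the first ip6tables record above, everything else is illustrative.

    # Decode an audit PROCTITLE field (hex-encoded, NUL-separated argv)
    # back into the command line that was executed.
    proctitle = (
        "2F7573722F62696E2F6970367461626C6573002D2D77616974002D49"
        "00444F434B45522D464F5257415244002D6A"
        "00444F434B45522D49534F4C4154494F4E2D53544147452D31"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> /usr/bin/ip6tables --wait -I DOCKER-FORWARD -j DOCKER-ISOLATION-STAGE-1

The same decoding applies to every NETFILTER_CFG/PROCTITLE pair in this section, which is how the Docker chain setup (DOCKER-USER, DOCKER-FORWARD, DOCKER-ISOLATION-STAGE-1/2) can be read off the audit trail.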
Jan 16 21:10:39.711000 audit[2019]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.711000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff4038ed50 a2=0 a3=0 items=0 ppid=1850 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 21:10:39.714000 audit[2021]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.714000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdcdff93b0 a2=0 a3=0 items=0 ppid=1850 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:10:39.717000 audit[2023]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:10:39.717000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffddbfc5970 a2=0 a3=0 items=0 ppid=1850 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:10:39.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 21:10:39.719352 systemd-networkd[1484]: docker0: Link UP Jan 16 21:10:39.725490 dockerd[1850]: time="2026-01-16T21:10:39.725415448Z" level=info msg="Loading containers: done." Jan 16 21:10:39.751504 dockerd[1850]: time="2026-01-16T21:10:39.751349797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 21:10:39.751504 dockerd[1850]: time="2026-01-16T21:10:39.751488280Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 21:10:39.751757 dockerd[1850]: time="2026-01-16T21:10:39.751617179Z" level=info msg="Initializing buildkit" Jan 16 21:10:39.793861 dockerd[1850]: time="2026-01-16T21:10:39.793777636Z" level=info msg="Completed buildkit initialization" Jan 16 21:10:39.804767 dockerd[1850]: time="2026-01-16T21:10:39.804657871Z" level=info msg="Daemon has completed initialization" Jan 16 21:10:39.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:39.806124 dockerd[1850]: time="2026-01-16T21:10:39.805130932Z" level=info msg="API listen on /run/docker.sock" Jan 16 21:10:39.805422 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 21:10:40.280926 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2380133674-merged.mount: Deactivated successfully. Jan 16 21:10:40.742784 containerd[1584]: time="2026-01-16T21:10:40.742394352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 21:10:41.534543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485014180.mount: Deactivated successfully. Jan 16 21:10:42.809537 containerd[1584]: time="2026-01-16T21:10:42.809416798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:42.811200 containerd[1584]: time="2026-01-16T21:10:42.810787222Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 16 21:10:42.811973 containerd[1584]: time="2026-01-16T21:10:42.811927052Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:42.815954 containerd[1584]: time="2026-01-16T21:10:42.815890340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:42.817312 containerd[1584]: time="2026-01-16T21:10:42.817264585Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.074813258s" Jan 16 21:10:42.817494 containerd[1584]: time="2026-01-16T21:10:42.817468737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 16 21:10:42.818347 containerd[1584]: time="2026-01-16T21:10:42.818319755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 21:10:43.176547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 21:10:43.182125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:10:43.420794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:10:43.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:43.422761 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 16 21:10:43.422874 kernel: audit: type=1130 audit(1768597843.420:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:10:43.432649 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:10:43.528429 kubelet[2136]: E0116 21:10:43.528309 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:10:43.534295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:10:43.534542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:10:43.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:10:43.539908 systemd[1]: kubelet.service: Consumed 261ms CPU time, 110.7M memory peak. Jan 16 21:10:43.541825 kernel: audit: type=1131 audit(1768597843.535:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:10:45.173761 containerd[1584]: time="2026-01-16T21:10:45.173333177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:45.174996 containerd[1584]: time="2026-01-16T21:10:45.174944411Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 16 21:10:45.176838 containerd[1584]: time="2026-01-16T21:10:45.176090099Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:45.183654 containerd[1584]: time="2026-01-16T21:10:45.182925858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:45.185795 containerd[1584]: time="2026-01-16T21:10:45.184614531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.366160368s" Jan 16 21:10:45.185795 containerd[1584]: time="2026-01-16T21:10:45.184678767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 16 21:10:45.186322 containerd[1584]: time="2026-01-16T21:10:45.186281043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 21:10:47.089778 containerd[1584]: time="2026-01-16T21:10:47.088723622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:47.091911 containerd[1584]: time="2026-01-16T21:10:47.091835474Z" level=info msg="stop pulling image 
registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19399691" Jan 16 21:10:47.092999 containerd[1584]: time="2026-01-16T21:10:47.092889281Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:47.097769 containerd[1584]: time="2026-01-16T21:10:47.097591237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:47.099354 containerd[1584]: time="2026-01-16T21:10:47.098956109Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.912412276s" Jan 16 21:10:47.099354 containerd[1584]: time="2026-01-16T21:10:47.099009589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 16 21:10:47.099514 containerd[1584]: time="2026-01-16T21:10:47.099450859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 21:10:47.101023 systemd-resolved[1263]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 21:10:48.632751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701325045.mount: Deactivated successfully. Jan 16 21:10:49.331334 containerd[1584]: time="2026-01-16T21:10:49.331280509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:49.332932 containerd[1584]: time="2026-01-16T21:10:49.332886951Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 16 21:10:49.334647 containerd[1584]: time="2026-01-16T21:10:49.333776435Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:49.335437 containerd[1584]: time="2026-01-16T21:10:49.335400845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:49.336535 containerd[1584]: time="2026-01-16T21:10:49.336491966Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.237006043s" Jan 16 21:10:49.336658 containerd[1584]: time="2026-01-16T21:10:49.336641797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 16 21:10:49.337618 containerd[1584]: time="2026-01-16T21:10:49.337589192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 21:10:50.092170 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1900703089.mount: Deactivated successfully. Jan 16 21:10:50.173957 systemd-resolved[1263]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 16 21:10:51.141558 containerd[1584]: time="2026-01-16T21:10:51.141461882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:51.143990 containerd[1584]: time="2026-01-16T21:10:51.143924971Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 16 21:10:51.144621 containerd[1584]: time="2026-01-16T21:10:51.144574908Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:51.148788 containerd[1584]: time="2026-01-16T21:10:51.148702111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.811073606s" Jan 16 21:10:51.148788 containerd[1584]: time="2026-01-16T21:10:51.148778332Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 16 21:10:51.149478 containerd[1584]: time="2026-01-16T21:10:51.148901357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:51.149983 containerd[1584]: time="2026-01-16T21:10:51.149929584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 21:10:51.788274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582956554.mount: Deactivated successfully. 
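[editor's note] containerd reports each completed pull as 'Pulled image ... size "N" in D' (the coredns pull above: 18562039 bytes in 1.811073606s). A rough sketch, assuming the journal lines are available as plain strings, that extracts size and duration and derives an approximate throughput; the regex and the trimmed sample line are mine, not a containerd API.

    import re

    # Trimmed from the coredns "Pulled image" entry logged above.
    line = ('Pulled image "registry.k8s.io/coredns/coredns:v1.11.3" ... '
            'size "18562039" in 1.811073606s')

    m = re.search(r'size "(\d+)" in ([0-9.]+)(ms|s)', line)
    if m:
        size_bytes = int(m.group(1))
        seconds = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
        print(f"{size_bytes / seconds / 1e6:.1f} MB/s over {seconds:.2f}s")
    # coredns above: ~18.6 MB in ~1.81 s, roughly 10 MB/s

The ms branch covers short pulls such as the pause image below, which containerd reports in milliseconds.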
Jan 16 21:10:51.795448 containerd[1584]: time="2026-01-16T21:10:51.794789526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:10:51.796619 containerd[1584]: time="2026-01-16T21:10:51.796583730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 21:10:51.797319 containerd[1584]: time="2026-01-16T21:10:51.797289854Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:10:51.799687 containerd[1584]: time="2026-01-16T21:10:51.799654401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:10:51.800373 containerd[1584]: time="2026-01-16T21:10:51.800339118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 650.371358ms" Jan 16 21:10:51.800426 containerd[1584]: time="2026-01-16T21:10:51.800377707Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 16 21:10:51.801017 containerd[1584]: time="2026-01-16T21:10:51.800885893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 21:10:52.647612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268023777.mount: Deactivated successfully. Jan 16 21:10:53.676262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 21:10:53.679414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:10:53.887326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:10:53.893031 kernel: audit: type=1130 audit(1768597853.887:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:53.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:10:53.908221 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:10:53.990172 kubelet[2271]: E0116 21:10:53.989903 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:10:53.993977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:10:53.994215 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
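[editor's note] Both kubelet starts so far exit with the same run.go:72 error: /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file only appears once kubeadm init/join has run. A small diagnostic sketch, not part of any tool in this log, that checks for the file before the unit is restarted; the config path is the one in the error above, while /var/lib/kubelet/kubeadm-flags.env is the conventional kubeadm drop-in location and is an assumption here.

    from pathlib import Path

    # Files the failing kubelet invocation above depends on.
    config = Path("/var/lib/kubelet/config.yaml")           # missing -> run.go:72 error above
    flags_env = Path("/var/lib/kubelet/kubeadm-flags.env")  # assumed kubeadm drop-in location

    for p in (config, flags_env):
        print(f"{p}: {'present' if p.exists() else 'MISSING'}")

    if not config.exists():
        print("kubelet will keep crash-looping until kubeadm writes this file")

That explains the "Scheduled restart job" loop in this section: systemd keeps restarting the unit until the control-plane bootstrap later in the log supplies the config.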
Jan 16 21:10:53.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:10:53.994831 systemd[1]: kubelet.service: Consumed 236ms CPU time, 107.8M memory peak. Jan 16 21:10:54.000808 kernel: audit: type=1131 audit(1768597853.994:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:10:56.531601 containerd[1584]: time="2026-01-16T21:10:56.530364429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:56.531601 containerd[1584]: time="2026-01-16T21:10:56.531553792Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55742349" Jan 16 21:10:56.532334 containerd[1584]: time="2026-01-16T21:10:56.532300867Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:56.535613 containerd[1584]: time="2026-01-16T21:10:56.535557139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:10:56.536961 containerd[1584]: time="2026-01-16T21:10:56.536916089Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.735993055s" Jan 16 21:10:56.536961 containerd[1584]: time="2026-01-16T21:10:56.536962420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 16 21:11:00.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:00.134599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:11:00.134806 systemd[1]: kubelet.service: Consumed 236ms CPU time, 107.8M memory peak. Jan 16 21:11:00.138793 kernel: audit: type=1130 audit(1768597860.134:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:00.139531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:11:00.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:00.147767 kernel: audit: type=1131 audit(1768597860.134:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:11:00.183971 systemd[1]: Reload requested from client PID 2312 ('systemctl') (unit session-8.scope)... Jan 16 21:11:00.184001 systemd[1]: Reloading... Jan 16 21:11:00.305821 zram_generator::config[2354]: No configuration found. Jan 16 21:11:00.631486 systemd[1]: Reloading finished in 447 ms. Jan 16 21:11:00.655563 kernel: audit: type=1334 audit(1768597860.652:289): prog-id=61 op=LOAD Jan 16 21:11:00.652000 audit: BPF prog-id=61 op=LOAD Jan 16 21:11:00.654000 audit: BPF prog-id=62 op=LOAD Jan 16 21:11:00.659773 kernel: audit: type=1334 audit(1768597860.654:290): prog-id=62 op=LOAD Jan 16 21:11:00.654000 audit: BPF prog-id=54 op=UNLOAD Jan 16 21:11:00.663770 kernel: audit: type=1334 audit(1768597860.654:291): prog-id=54 op=UNLOAD Jan 16 21:11:00.663871 kernel: audit: type=1334 audit(1768597860.654:292): prog-id=55 op=UNLOAD Jan 16 21:11:00.654000 audit: BPF prog-id=55 op=UNLOAD Jan 16 21:11:00.656000 audit: BPF prog-id=63 op=LOAD Jan 16 21:11:00.666885 kernel: audit: type=1334 audit(1768597860.656:293): prog-id=63 op=LOAD Jan 16 21:11:00.666977 kernel: audit: type=1334 audit(1768597860.656:294): prog-id=42 op=UNLOAD Jan 16 21:11:00.656000 audit: BPF prog-id=42 op=UNLOAD Jan 16 21:11:00.660000 audit: BPF prog-id=64 op=LOAD Jan 16 21:11:00.675130 kernel: audit: type=1334 audit(1768597860.660:295): prog-id=64 op=LOAD Jan 16 21:11:00.675245 kernel: audit: type=1334 audit(1768597860.660:296): prog-id=65 op=LOAD Jan 16 21:11:00.660000 audit: BPF prog-id=65 op=LOAD Jan 16 21:11:00.660000 audit: BPF prog-id=43 op=UNLOAD Jan 16 21:11:00.660000 audit: BPF prog-id=44 op=UNLOAD Jan 16 21:11:00.662000 audit: BPF prog-id=66 op=LOAD Jan 16 21:11:00.668000 audit: BPF prog-id=41 op=UNLOAD Jan 16 21:11:00.670000 audit: BPF prog-id=67 op=LOAD Jan 16 21:11:00.670000 audit: BPF prog-id=58 op=UNLOAD Jan 16 21:11:00.670000 audit: BPF prog-id=68 op=LOAD Jan 16 21:11:00.670000 audit: BPF prog-id=69 op=LOAD Jan 16 21:11:00.670000 audit: BPF prog-id=59 op=UNLOAD Jan 16 21:11:00.670000 audit: BPF prog-id=60 op=UNLOAD Jan 16 21:11:00.671000 audit: BPF prog-id=70 op=LOAD Jan 16 21:11:00.671000 audit: BPF prog-id=56 op=UNLOAD Jan 16 21:11:00.672000 audit: BPF prog-id=71 op=LOAD Jan 16 21:11:00.673000 audit: BPF prog-id=45 op=UNLOAD Jan 16 21:11:00.673000 audit: BPF prog-id=72 op=LOAD Jan 16 21:11:00.673000 audit: BPF prog-id=73 op=LOAD Jan 16 21:11:00.673000 audit: BPF prog-id=46 op=UNLOAD Jan 16 21:11:00.673000 audit: BPF prog-id=47 op=UNLOAD Jan 16 21:11:00.674000 audit: BPF prog-id=74 op=LOAD Jan 16 21:11:00.674000 audit: BPF prog-id=48 op=UNLOAD Jan 16 21:11:00.674000 audit: BPF prog-id=75 op=LOAD Jan 16 21:11:00.674000 audit: BPF prog-id=76 op=LOAD Jan 16 21:11:00.674000 audit: BPF prog-id=49 op=UNLOAD Jan 16 21:11:00.674000 audit: BPF prog-id=50 op=UNLOAD Jan 16 21:11:00.675000 audit: BPF prog-id=77 op=LOAD Jan 16 21:11:00.675000 audit: BPF prog-id=57 op=UNLOAD Jan 16 21:11:00.677000 audit: BPF prog-id=78 op=LOAD Jan 16 21:11:00.677000 audit: BPF prog-id=51 op=UNLOAD Jan 16 21:11:00.677000 audit: BPF prog-id=79 op=LOAD Jan 16 21:11:00.677000 audit: BPF prog-id=80 op=LOAD Jan 16 21:11:00.677000 audit: BPF prog-id=52 op=UNLOAD Jan 16 21:11:00.677000 audit: BPF prog-id=53 op=UNLOAD Jan 16 21:11:00.696515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 21:11:00.696648 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 21:11:00.697154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
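[editor's note] The burst of audit records during the daemon reload above is systemd re-attaching its BPF programs: each affected unit's old program is UNLOADed and a replacement LOADed, so the LOAD and UNLOAD counts roughly pair up. A throwaway sketch that tallies the ops from raw audit lines like the ones above; the parsing and the four sample lines are illustrative only.

    import re
    from collections import Counter

    # A few of the audit records emitted during the reload above.
    lines = [
        "audit: BPF prog-id=61 op=LOAD",
        "audit: BPF prog-id=62 op=LOAD",
        "audit: BPF prog-id=54 op=UNLOAD",
        "audit: BPF prog-id=55 op=UNLOAD",
    ]

    ops = Counter()
    for line in lines:
        m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
        if m:
            ops[m.group(2)] += 1

    print(dict(ops))   # e.g. {'LOAD': 2, 'UNLOAD': 2}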
Jan 16 21:11:00.697224 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.5M memory peak. Jan 16 21:11:00.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:11:00.699414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:11:00.880092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:11:00.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:00.894362 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:11:00.953242 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:11:00.953242 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:11:00.953242 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:11:00.953844 kubelet[2412]: I0116 21:11:00.953310 2412 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:11:01.453767 kubelet[2412]: I0116 21:11:01.452393 2412 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 21:11:01.453767 kubelet[2412]: I0116 21:11:01.452436 2412 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:11:01.453767 kubelet[2412]: I0116 21:11:01.452725 2412 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 21:11:01.484298 kubelet[2412]: E0116 21:11:01.484012 2412 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.190.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:01.485020 kubelet[2412]: I0116 21:11:01.484991 2412 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:11:01.506211 kubelet[2412]: I0116 21:11:01.506115 2412 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:11:01.511659 kubelet[2412]: I0116 21:11:01.511617 2412 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 21:11:01.514979 kubelet[2412]: I0116 21:11:01.514852 2412 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:11:01.515233 kubelet[2412]: I0116 21:11:01.514948 2412 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580.0.0-p-735bf5553b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:11:01.516943 kubelet[2412]: I0116 21:11:01.516879 2412 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 21:11:01.516943 kubelet[2412]: I0116 21:11:01.516928 2412 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 21:11:01.518243 kubelet[2412]: I0116 21:11:01.518188 2412 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:11:01.523111 kubelet[2412]: I0116 21:11:01.522942 2412 kubelet.go:446] "Attempting to sync node with API server" Jan 16 21:11:01.523111 kubelet[2412]: I0116 21:11:01.523032 2412 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:11:01.523111 kubelet[2412]: I0116 21:11:01.523079 2412 kubelet.go:352] "Adding apiserver pod source" Jan 16 21:11:01.523111 kubelet[2412]: I0116 21:11:01.523098 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:11:01.529412 kubelet[2412]: W0116 21:11:01.529357 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.190.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-735bf5553b&limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:01.530021 kubelet[2412]: E0116 21:11:01.529989 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.190.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-735bf5553b&limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 
21:11:01.530716 kubelet[2412]: I0116 21:11:01.530555 2412 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:11:01.534460 kubelet[2412]: I0116 21:11:01.534303 2412 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 21:11:01.535092 kubelet[2412]: W0116 21:11:01.535066 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 21:11:01.536605 kubelet[2412]: W0116 21:11:01.536498 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.190.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:01.536605 kubelet[2412]: E0116 21:11:01.536555 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.190.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:01.536878 kubelet[2412]: I0116 21:11:01.536861 2412 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 21:11:01.536917 kubelet[2412]: I0116 21:11:01.536894 2412 server.go:1287] "Started kubelet" Jan 16 21:11:01.537085 kubelet[2412]: I0116 21:11:01.537049 2412 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:11:01.539807 kubelet[2412]: I0116 21:11:01.538711 2412 server.go:479] "Adding debug handlers to kubelet server" Jan 16 21:11:01.545090 kubelet[2412]: I0116 21:11:01.544532 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:11:01.545239 kubelet[2412]: I0116 21:11:01.545114 2412 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:11:01.545710 kubelet[2412]: I0116 21:11:01.545680 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 21:11:01.552371 kubelet[2412]: E0116 21:11:01.546987 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.190.135:6443/api/v1/namespaces/default/events\": dial tcp 137.184.190.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4580.0.0-p-735bf5553b.188b5262776bc717 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580.0.0-p-735bf5553b,UID:ci-4580.0.0-p-735bf5553b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580.0.0-p-735bf5553b,},FirstTimestamp:2026-01-16 21:11:01.536876311 +0000 UTC m=+0.638172514,LastTimestamp:2026-01-16 21:11:01.536876311 +0000 UTC m=+0.638172514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580.0.0-p-735bf5553b,}" Jan 16 21:11:01.552858 kubelet[2412]: I0116 21:11:01.552833 2412 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:11:01.554000 audit[2423]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 16 21:11:01.554000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffca546b8d0 a2=0 a3=0 items=0 ppid=2412 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 21:11:01.556000 audit[2424]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.556000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd9378e40 a2=0 a3=0 items=0 ppid=2412 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:11:01.558850 kubelet[2412]: I0116 21:11:01.558822 2412 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 21:11:01.559663 kubelet[2412]: E0116 21:11:01.559630 2412 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580.0.0-p-735bf5553b\" not found" Jan 16 21:11:01.560624 kubelet[2412]: E0116 21:11:01.560578 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-735bf5553b?timeout=10s\": dial tcp 137.184.190.135:6443: connect: connection refused" interval="200ms" Jan 16 21:11:01.560716 kubelet[2412]: I0116 21:11:01.560669 2412 reconciler.go:26] "Reconciler: start to sync state" Jan 16 21:11:01.560716 kubelet[2412]: I0116 21:11:01.560703 2412 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 21:11:01.560000 audit[2426]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.560000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffff07e2ee0 a2=0 a3=0 items=0 ppid=2412 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:11:01.563000 audit[2428]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.563000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff7f8a5720 a2=0 a3=0 items=0 ppid=2412 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:11:01.569884 kubelet[2412]: W0116 21:11:01.569285 2412 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.190.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:01.569884 kubelet[2412]: E0116 21:11:01.569356 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.190.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:01.571616 kubelet[2412]: I0116 21:11:01.570912 2412 factory.go:221] Registration of the systemd container factory successfully Jan 16 21:11:01.571616 kubelet[2412]: I0116 21:11:01.571038 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:11:01.581759 kubelet[2412]: I0116 21:11:01.579984 2412 factory.go:221] Registration of the containerd container factory successfully Jan 16 21:11:01.583000 audit[2431]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.583000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc2ec405c0 a2=0 a3=0 items=0 ppid=2412 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 16 21:11:01.584775 kubelet[2412]: I0116 21:11:01.584705 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 21:11:01.586000 audit[2432]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:01.586000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce2b40610 a2=0 a3=0 items=0 ppid=2412 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 21:11:01.588020 kubelet[2412]: I0116 21:11:01.587992 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 21:11:01.588073 kubelet[2412]: I0116 21:11:01.588030 2412 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 21:11:01.588073 kubelet[2412]: I0116 21:11:01.588060 2412 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
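[editor's note] The nodeConfig=... blob logged by container_manager_linux.go a few entries above is plain JSON, so the eviction thresholds and cgroup settings can be pulled out of the journal mechanically. A sketch assuming the JSON has already been cut out of the log line; only a trimmed fragment of the real payload is reproduced here.

    import json

    # Trimmed from the "Creating Container Manager object" entry above.
    node_config = json.loads("""
    {"NodeName":"ci-4580.0.0-p-735bf5553b","CgroupRoot":"/","CgroupDriver":"systemd",
     "CgroupsPerQOS":true,"CgroupVersion":2,
     "HardEvictionThresholds":[
       {"Signal":"memory.available","Operator":"LessThan",
        "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
       {"Signal":"nodefs.available","Operator":"LessThan",
        "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
        print(f'{t["Signal"]} {t["Operator"]} {limit}')
    # memory.available LessThan 100Mi
    # nodefs.available LessThan 10%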
Jan 16 21:11:01.588073 kubelet[2412]: I0116 21:11:01.588070 2412 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 21:11:01.588183 kubelet[2412]: E0116 21:11:01.588141 2412 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 21:11:01.591000 audit[2434]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.591000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee2184e80 a2=0 a3=0 items=0 ppid=2412 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 21:11:01.594000 audit[2435]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:01.594000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe139277a0 a2=0 a3=0 items=0 ppid=2412 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 21:11:01.597000 audit[2436]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:01.597000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd33f78590 a2=0 a3=0 items=0 ppid=2412 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 21:11:01.598000 audit[2437]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:01.598000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe062748b0 a2=0 a3=0 items=0 ppid=2412 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 21:11:01.601000 audit[2438]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.601000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb2cf1040 a2=0 a3=0 items=0 ppid=2412 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:11:01.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 21:11:01.603000 audit[2439]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:01.603000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7fcb56c0 a2=0 a3=0 items=0 ppid=2412 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:01.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 21:11:01.607471 kubelet[2412]: W0116 21:11:01.607405 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.190.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:01.607606 kubelet[2412]: E0116 21:11:01.607480 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.190.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:01.608885 kubelet[2412]: E0116 21:11:01.608846 2412 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:11:01.617870 kubelet[2412]: I0116 21:11:01.617818 2412 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 21:11:01.617870 kubelet[2412]: I0116 21:11:01.617847 2412 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 21:11:01.617870 kubelet[2412]: I0116 21:11:01.617877 2412 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:11:01.621577 kubelet[2412]: I0116 21:11:01.621531 2412 policy_none.go:49] "None policy: Start" Jan 16 21:11:01.621577 kubelet[2412]: I0116 21:11:01.621578 2412 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 21:11:01.621786 kubelet[2412]: I0116 21:11:01.621597 2412 state_mem.go:35] "Initializing new in-memory state store" Jan 16 21:11:01.630283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 21:11:01.644257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 21:11:01.649259 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
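[editor's note] Every reflector and the lease controller in this stretch fails with "dial tcp 137.184.190.135:6443: connect: connection refused": the kubelet is up before the API server it is about to launch as a static pod. A quick connectivity probe along the same lines, written as a sketch; the host and port come straight from the log, the timeout value is arbitrary.

    import socket

    # Endpoint the reflectors above keep retrying.
    host, port = "137.184.190.135", 6443

    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} is accepting connections")
    except OSError as err:  # e.g. ConnectionRefusedError, the state seen in the log
        print(f"{host}:{port} unreachable: {err}")

Once the kube-apiserver static pod further down comes up, the same probe succeeds and the reflector errors stop.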
Jan 16 21:11:01.660250 kubelet[2412]: E0116 21:11:01.660189 2412 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580.0.0-p-735bf5553b\" not found" Jan 16 21:11:01.669176 kubelet[2412]: I0116 21:11:01.669122 2412 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 21:11:01.669395 kubelet[2412]: I0116 21:11:01.669366 2412 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 21:11:01.669439 kubelet[2412]: I0116 21:11:01.669393 2412 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 21:11:01.671647 kubelet[2412]: I0116 21:11:01.670988 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 21:11:01.674237 kubelet[2412]: E0116 21:11:01.674112 2412 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 21:11:01.674425 kubelet[2412]: E0116 21:11:01.674259 2412 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4580.0.0-p-735bf5553b\" not found" Jan 16 21:11:01.700681 systemd[1]: Created slice kubepods-burstable-pode5ecd0d08be0c27ef085da38bccd2531.slice - libcontainer container kubepods-burstable-pode5ecd0d08be0c27ef085da38bccd2531.slice. Jan 16 21:11:01.732903 kubelet[2412]: E0116 21:11:01.732726 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.739851 systemd[1]: Created slice kubepods-burstable-podf3dcc1b88e3d26d585f331fead3e2705.slice - libcontainer container kubepods-burstable-podf3dcc1b88e3d26d585f331fead3e2705.slice. Jan 16 21:11:01.743776 kubelet[2412]: E0116 21:11:01.743401 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.748148 systemd[1]: Created slice kubepods-burstable-pod57017fd5845344be1aca69aa1a35778b.slice - libcontainer container kubepods-burstable-pod57017fd5845344be1aca69aa1a35778b.slice. 
Jan 16 21:11:01.751338 kubelet[2412]: E0116 21:11:01.751003 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.761564 kubelet[2412]: I0116 21:11:01.761510 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-k8s-certs\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.761916 kubelet[2412]: E0116 21:11:01.761870 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-735bf5553b?timeout=10s\": dial tcp 137.184.190.135:6443: connect: connection refused" interval="400ms" Jan 16 21:11:01.762098 kubelet[2412]: I0116 21:11:01.761989 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-ca-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762098 kubelet[2412]: I0116 21:11:01.762017 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-flexvolume-dir\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762445 kubelet[2412]: I0116 21:11:01.762310 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-kubeconfig\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762445 kubelet[2412]: I0116 21:11:01.762397 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-ca-certs\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762621 kubelet[2412]: I0116 21:11:01.762426 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762621 kubelet[2412]: I0116 21:11:01.762592 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-k8s-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " 
pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.762836 kubelet[2412]: I0116 21:11:01.762799 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.763040 kubelet[2412]: I0116 21:11:01.762986 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5ecd0d08be0c27ef085da38bccd2531-kubeconfig\") pod \"kube-scheduler-ci-4580.0.0-p-735bf5553b\" (UID: \"e5ecd0d08be0c27ef085da38bccd2531\") " pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.771702 kubelet[2412]: I0116 21:11:01.771621 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.772594 kubelet[2412]: E0116 21:11:01.772551 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.190.135:6443/api/v1/nodes\": dial tcp 137.184.190.135:6443: connect: connection refused" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.975235 kubelet[2412]: I0116 21:11:01.974774 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:01.975718 kubelet[2412]: E0116 21:11:01.975284 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.190.135:6443/api/v1/nodes\": dial tcp 137.184.190.135:6443: connect: connection refused" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:02.034480 kubelet[2412]: E0116 21:11:02.034175 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.036603 containerd[1584]: time="2026-01-16T21:11:02.036513004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580.0.0-p-735bf5553b,Uid:e5ecd0d08be0c27ef085da38bccd2531,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:02.046917 kubelet[2412]: E0116 21:11:02.046857 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.047564 containerd[1584]: time="2026-01-16T21:11:02.047521512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580.0.0-p-735bf5553b,Uid:f3dcc1b88e3d26d585f331fead3e2705,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:02.052656 kubelet[2412]: E0116 21:11:02.052560 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.054479 containerd[1584]: time="2026-01-16T21:11:02.054439753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580.0.0-p-735bf5553b,Uid:57017fd5845344be1aca69aa1a35778b,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:02.106897 kubelet[2412]: E0116 21:11:02.106607 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.190.135:6443/api/v1/namespaces/default/events\": dial tcp 
137.184.190.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4580.0.0-p-735bf5553b.188b5262776bc717 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580.0.0-p-735bf5553b,UID:ci-4580.0.0-p-735bf5553b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580.0.0-p-735bf5553b,},FirstTimestamp:2026-01-16 21:11:01.536876311 +0000 UTC m=+0.638172514,LastTimestamp:2026-01-16 21:11:01.536876311 +0000 UTC m=+0.638172514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580.0.0-p-735bf5553b,}" Jan 16 21:11:02.163579 kubelet[2412]: E0116 21:11:02.163520 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-735bf5553b?timeout=10s\": dial tcp 137.184.190.135:6443: connect: connection refused" interval="800ms" Jan 16 21:11:02.172084 containerd[1584]: time="2026-01-16T21:11:02.172030783Z" level=info msg="connecting to shim 3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675" address="unix:///run/containerd/s/16919941401d63fe4e4c6fd5746420b3aebc0a1480c1776fe4a68d37bad53cf5" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:02.172916 containerd[1584]: time="2026-01-16T21:11:02.172881482Z" level=info msg="connecting to shim c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8" address="unix:///run/containerd/s/7c96ea3162b5d30fb711ba8957ad2023f2d68a7225d4b945c2eb1dcedf16c4e5" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:02.176915 containerd[1584]: time="2026-01-16T21:11:02.176860406Z" level=info msg="connecting to shim 6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465" address="unix:///run/containerd/s/d511cea56991d2c0a0dc8e0dc4945d4bd5850257966b158f9e7f1bcd6903a46e" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:02.300758 systemd[1]: Started cri-containerd-6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465.scope - libcontainer container 6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465. Jan 16 21:11:02.310185 systemd[1]: Started cri-containerd-c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8.scope - libcontainer container c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8. Jan 16 21:11:02.322561 systemd[1]: Started cri-containerd-3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675.scope - libcontainer container 3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675. 
Jan 16 21:11:02.339000 audit: BPF prog-id=81 op=LOAD Jan 16 21:11:02.340000 audit: BPF prog-id=82 op=LOAD Jan 16 21:11:02.340000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.340000 audit: BPF prog-id=82 op=UNLOAD Jan 16 21:11:02.340000 audit[2502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.349000 audit: BPF prog-id=83 op=LOAD Jan 16 21:11:02.349000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.349000 audit: BPF prog-id=84 op=LOAD Jan 16 21:11:02.349000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.349000 audit: BPF prog-id=84 op=UNLOAD Jan 16 21:11:02.349000 audit[2502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.349000 audit: BPF prog-id=83 op=UNLOAD Jan 16 21:11:02.349000 audit[2502]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.349000 audit: BPF prog-id=85 op=LOAD Jan 16 21:11:02.349000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2471 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353236313737366363336434306464343738373936643438343866 Jan 16 21:11:02.350000 audit: BPF prog-id=86 op=LOAD Jan 16 21:11:02.350000 audit: BPF prog-id=87 op=LOAD Jan 16 21:11:02.350000 audit[2498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=87 op=UNLOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=88 op=LOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=89 op=LOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=2468 pid=2498 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=89 op=UNLOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=88 op=UNLOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.351000 audit: BPF prog-id=90 op=LOAD Jan 16 21:11:02.351000 audit[2498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=2468 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323531626234373938626463323432643764393265613266386466 Jan 16 21:11:02.368000 audit: BPF prog-id=91 op=LOAD Jan 16 21:11:02.370000 audit: BPF prog-id=92 op=LOAD Jan 16 21:11:02.370000 audit[2501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.370000 audit: BPF prog-id=92 op=UNLOAD Jan 16 21:11:02.370000 audit[2501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.372000 audit: BPF prog-id=93 op=LOAD Jan 16 21:11:02.372000 audit[2501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.372000 audit: BPF prog-id=94 op=LOAD Jan 16 21:11:02.372000 audit[2501]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.372000 audit: BPF prog-id=94 op=UNLOAD Jan 16 21:11:02.372000 audit[2501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.373000 audit: BPF prog-id=93 op=UNLOAD Jan 16 21:11:02.373000 audit[2501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.373000 audit: BPF prog-id=95 op=LOAD Jan 16 21:11:02.373000 audit[2501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2469 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.373000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362613138306230303238313265616634336630373032343236663633 Jan 16 21:11:02.379769 kubelet[2412]: I0116 21:11:02.379169 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:02.379769 kubelet[2412]: E0116 21:11:02.379693 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.190.135:6443/api/v1/nodes\": dial tcp 137.184.190.135:6443: connect: connection refused" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:02.432108 kubelet[2412]: W0116 21:11:02.431446 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.190.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:02.433623 kubelet[2412]: E0116 21:11:02.433540 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.190.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:02.446252 containerd[1584]: time="2026-01-16T21:11:02.443140947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580.0.0-p-735bf5553b,Uid:57017fd5845344be1aca69aa1a35778b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8\"" Jan 16 21:11:02.458769 kubelet[2412]: E0116 21:11:02.458512 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.469090 containerd[1584]: time="2026-01-16T21:11:02.468975960Z" level=info msg="CreateContainer within sandbox \"c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 21:11:02.476131 containerd[1584]: time="2026-01-16T21:11:02.475446616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580.0.0-p-735bf5553b,Uid:e5ecd0d08be0c27ef085da38bccd2531,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465\"" Jan 16 21:11:02.478315 kubelet[2412]: E0116 21:11:02.478054 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.479029 kubelet[2412]: W0116 21:11:02.478712 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.190.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-735bf5553b&limit=500&resourceVersion=0": dial tcp 137.184.190.135:6443: connect: connection refused Jan 16 21:11:02.479029 kubelet[2412]: E0116 21:11:02.478981 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://137.184.190.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-735bf5553b&limit=500&resourceVersion=0\": dial tcp 137.184.190.135:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:11:02.492779 containerd[1584]: time="2026-01-16T21:11:02.492630761Z" level=info msg="Container b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:02.497439 containerd[1584]: time="2026-01-16T21:11:02.497376010Z" level=info msg="CreateContainer within sandbox \"6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 21:11:02.499308 containerd[1584]: time="2026-01-16T21:11:02.499255643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580.0.0-p-735bf5553b,Uid:f3dcc1b88e3d26d585f331fead3e2705,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675\"" Jan 16 21:11:02.500690 kubelet[2412]: E0116 21:11:02.500569 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:02.503931 containerd[1584]: time="2026-01-16T21:11:02.503868818Z" level=info msg="CreateContainer within sandbox \"3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 21:11:02.508363 containerd[1584]: time="2026-01-16T21:11:02.508306804Z" level=info msg="CreateContainer within sandbox \"c0251bb4798bdc242d7d92ea2f8dfe8f77c297a5c42176b92f0fd591b705f2e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280\"" Jan 16 21:11:02.510266 containerd[1584]: time="2026-01-16T21:11:02.510182202Z" level=info msg="StartContainer for \"b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280\"" Jan 16 21:11:02.512089 containerd[1584]: time="2026-01-16T21:11:02.511987434Z" level=info msg="connecting to shim b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280" address="unix:///run/containerd/s/7c96ea3162b5d30fb711ba8957ad2023f2d68a7225d4b945c2eb1dcedf16c4e5" protocol=ttrpc version=3 Jan 16 21:11:02.517569 containerd[1584]: time="2026-01-16T21:11:02.517379219Z" level=info msg="Container 0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:02.517889 containerd[1584]: time="2026-01-16T21:11:02.517852494Z" level=info msg="Container 7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:02.526238 containerd[1584]: time="2026-01-16T21:11:02.526185235Z" level=info msg="CreateContainer within sandbox \"3ba180b002812eaf43f0702426f6386ff20553b054ec49d573dd4a631c4a1675\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93\"" Jan 16 21:11:02.527082 containerd[1584]: time="2026-01-16T21:11:02.527049022Z" level=info msg="StartContainer for \"0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93\"" Jan 16 21:11:02.528496 containerd[1584]: time="2026-01-16T21:11:02.528443440Z" level=info msg="connecting to shim 0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93" 
address="unix:///run/containerd/s/16919941401d63fe4e4c6fd5746420b3aebc0a1480c1776fe4a68d37bad53cf5" protocol=ttrpc version=3 Jan 16 21:11:02.531969 containerd[1584]: time="2026-01-16T21:11:02.531930802Z" level=info msg="CreateContainer within sandbox \"6a5261776cc3d40dd478796d4848f18015ee703fb87b5870621d5bbbf9bc2465\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a\"" Jan 16 21:11:02.534487 containerd[1584]: time="2026-01-16T21:11:02.533893099Z" level=info msg="StartContainer for \"7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a\"" Jan 16 21:11:02.539703 containerd[1584]: time="2026-01-16T21:11:02.539654539Z" level=info msg="connecting to shim 7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a" address="unix:///run/containerd/s/d511cea56991d2c0a0dc8e0dc4945d4bd5850257966b158f9e7f1bcd6903a46e" protocol=ttrpc version=3 Jan 16 21:11:02.551011 systemd[1]: Started cri-containerd-b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280.scope - libcontainer container b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280. Jan 16 21:11:02.568198 systemd[1]: Started cri-containerd-0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93.scope - libcontainer container 0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93. Jan 16 21:11:02.585098 systemd[1]: Started cri-containerd-7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a.scope - libcontainer container 7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a. Jan 16 21:11:02.589000 audit: BPF prog-id=96 op=LOAD Jan 16 21:11:02.590000 audit: BPF prog-id=97 op=LOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.590000 audit: BPF prog-id=97 op=UNLOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.590000 audit: BPF prog-id=98 op=LOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.590000 audit: BPF prog-id=99 op=LOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.590000 audit: BPF prog-id=99 op=UNLOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.590000 audit: BPF prog-id=98 op=UNLOAD Jan 16 21:11:02.590000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.591000 audit: BPF prog-id=100 op=LOAD Jan 16 21:11:02.591000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2468 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323563393534363265356263633334623339323765653663323832 Jan 16 21:11:02.597000 audit: BPF prog-id=101 op=LOAD Jan 16 21:11:02.598000 audit: BPF prog-id=102 op=LOAD Jan 16 21:11:02.598000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.598000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.598000 audit: BPF prog-id=102 op=UNLOAD Jan 16 21:11:02.598000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.598000 audit: BPF prog-id=103 op=LOAD Jan 16 21:11:02.598000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.599000 audit: BPF prog-id=104 op=LOAD Jan 16 21:11:02.599000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.599000 audit: BPF prog-id=104 op=UNLOAD Jan 16 21:11:02.599000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.599000 audit: BPF prog-id=103 op=UNLOAD Jan 16 21:11:02.599000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.599000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.599000 audit: BPF prog-id=105 op=LOAD Jan 16 21:11:02.599000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2469 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064323434653239626437323233336638316634383237336330656134 Jan 16 21:11:02.616000 audit: BPF prog-id=106 op=LOAD Jan 16 21:11:02.617000 audit: BPF prog-id=107 op=LOAD Jan 16 21:11:02.617000 audit[2600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.617000 audit: BPF prog-id=107 op=UNLOAD Jan 16 21:11:02.617000 audit[2600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.619000 audit: BPF prog-id=108 op=LOAD Jan 16 21:11:02.619000 audit[2600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.620000 audit: BPF prog-id=109 op=LOAD Jan 16 21:11:02.620000 audit[2600]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.620000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.620000 audit: BPF prog-id=109 op=UNLOAD Jan 16 21:11:02.620000 audit[2600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.621000 audit: BPF prog-id=108 op=UNLOAD Jan 16 21:11:02.621000 audit[2600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.621000 audit: BPF prog-id=110 op=LOAD Jan 16 21:11:02.621000 audit[2600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2471 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:02.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393534363063613536346631383832633864613762303830306665 Jan 16 21:11:02.697526 containerd[1584]: time="2026-01-16T21:11:02.697425041Z" level=info msg="StartContainer for \"0d244e29bd72233f81f48273c0ea48091bea321480174f064294cc11a0620f93\" returns successfully" Jan 16 21:11:02.714629 containerd[1584]: time="2026-01-16T21:11:02.713528440Z" level=info msg="StartContainer for \"7695460ca564f1882c8da7b0800fec152bde753791c73075ccf645185183553a\" returns successfully" Jan 16 21:11:02.724278 containerd[1584]: time="2026-01-16T21:11:02.724222579Z" level=info msg="StartContainer for \"b825c95462e5bcc34b3927ee6c2822ee91438a936ff8dbfdd8a3804843782280\" returns successfully" Jan 16 21:11:02.965151 kubelet[2412]: E0116 21:11:02.965078 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-735bf5553b?timeout=10s\": dial tcp 137.184.190.135:6443: connect: connection refused" interval="1.6s" Jan 16 21:11:03.181517 kubelet[2412]: I0116 21:11:03.181460 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:03.633481 kubelet[2412]: E0116 21:11:03.633441 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:03.633671 kubelet[2412]: E0116 21:11:03.633606 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:03.638634 kubelet[2412]: E0116 21:11:03.638580 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:03.638841 kubelet[2412]: E0116 21:11:03.638802 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:03.642834 kubelet[2412]: E0116 21:11:03.642587 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:03.643558 kubelet[2412]: E0116 21:11:03.643477 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:04.646979 kubelet[2412]: E0116 21:11:04.646932 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:04.647522 kubelet[2412]: E0116 21:11:04.647118 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:04.647522 kubelet[2412]: E0116 21:11:04.647348 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:04.647522 kubelet[2412]: E0116 21:11:04.647418 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:04.649049 kubelet[2412]: E0116 21:11:04.649020 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:04.649186 kubelet[2412]: E0116 21:11:04.649144 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:05.072586 kubelet[2412]: E0116 21:11:05.072526 2412 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4580.0.0-p-735bf5553b\" not found" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.146321 kubelet[2412]: I0116 21:11:05.146249 2412 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.160721 kubelet[2412]: I0116 21:11:05.160662 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.246761 kubelet[2412]: E0116 21:11:05.245080 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4580.0.0-p-735bf5553b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.246761 kubelet[2412]: I0116 21:11:05.245120 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.257459 kubelet[2412]: E0116 21:11:05.257396 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.257459 kubelet[2412]: I0116 21:11:05.257451 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.267029 kubelet[2412]: E0116 21:11:05.266973 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-735bf5553b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.539761 kubelet[2412]: I0116 21:11:05.539660 2412 apiserver.go:52] "Watching apiserver" Jan 16 21:11:05.561990 kubelet[2412]: I0116 21:11:05.561879 2412 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 21:11:05.647385 kubelet[2412]: I0116 21:11:05.647349 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.649325 kubelet[2412]: I0116 21:11:05.649257 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.649912 kubelet[2412]: E0116 21:11:05.649891 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-735bf5553b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.650233 kubelet[2412]: E0116 21:11:05.650199 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:05.652840 kubelet[2412]: E0116 21:11:05.652768 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.653202 kubelet[2412]: E0116 21:11:05.653128 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:05.930439 kubelet[2412]: I0116 21:11:05.930388 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:05.941507 kubelet[2412]: W0116 21:11:05.941047 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:05.941507 kubelet[2412]: E0116 21:11:05.941399 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:06.650257 
kubelet[2412]: E0116 21:11:06.650195 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:07.373323 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-8.scope)... Jan 16 21:11:07.373344 systemd[1]: Reloading... Jan 16 21:11:07.507771 zram_generator::config[2722]: No configuration found. Jan 16 21:11:07.910401 systemd[1]: Reloading finished in 536 ms. Jan 16 21:11:07.951167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:11:07.967612 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 21:11:07.968038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:11:07.973802 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 16 21:11:07.973886 kernel: audit: type=1131 audit(1768597867.967:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:07.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:07.968143 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 127.1M memory peak. Jan 16 21:11:07.972872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:11:07.975000 audit: BPF prog-id=111 op=LOAD Jan 16 21:11:07.980061 kernel: audit: type=1334 audit(1768597867.975:392): prog-id=111 op=LOAD Jan 16 21:11:07.980189 kernel: audit: type=1334 audit(1768597867.975:393): prog-id=78 op=UNLOAD Jan 16 21:11:07.975000 audit: BPF prog-id=78 op=UNLOAD Jan 16 21:11:07.982279 kernel: audit: type=1334 audit(1768597867.975:394): prog-id=112 op=LOAD Jan 16 21:11:07.975000 audit: BPF prog-id=112 op=LOAD Jan 16 21:11:07.983964 kernel: audit: type=1334 audit(1768597867.975:395): prog-id=113 op=LOAD Jan 16 21:11:07.975000 audit: BPF prog-id=113 op=LOAD Jan 16 21:11:07.985505 kernel: audit: type=1334 audit(1768597867.975:396): prog-id=79 op=UNLOAD Jan 16 21:11:07.975000 audit: BPF prog-id=79 op=UNLOAD Jan 16 21:11:07.987458 kernel: audit: type=1334 audit(1768597867.975:397): prog-id=80 op=UNLOAD Jan 16 21:11:07.975000 audit: BPF prog-id=80 op=UNLOAD Jan 16 21:11:07.989512 kernel: audit: type=1334 audit(1768597867.979:398): prog-id=114 op=LOAD Jan 16 21:11:07.979000 audit: BPF prog-id=114 op=LOAD Jan 16 21:11:07.991301 kernel: audit: type=1334 audit(1768597867.979:399): prog-id=74 op=UNLOAD Jan 16 21:11:07.979000 audit: BPF prog-id=74 op=UNLOAD Jan 16 21:11:07.993281 kernel: audit: type=1334 audit(1768597867.979:400): prog-id=115 op=LOAD Jan 16 21:11:07.979000 audit: BPF prog-id=115 op=LOAD Jan 16 21:11:07.979000 audit: BPF prog-id=116 op=LOAD Jan 16 21:11:07.979000 audit: BPF prog-id=75 op=UNLOAD Jan 16 21:11:07.979000 audit: BPF prog-id=76 op=UNLOAD Jan 16 21:11:07.982000 audit: BPF prog-id=117 op=LOAD Jan 16 21:11:07.982000 audit: BPF prog-id=77 op=UNLOAD Jan 16 21:11:07.989000 audit: BPF prog-id=118 op=LOAD Jan 16 21:11:07.989000 audit: BPF prog-id=66 op=UNLOAD Jan 16 21:11:07.992000 audit: BPF prog-id=119 op=LOAD Jan 16 21:11:07.992000 audit: BPF prog-id=67 op=UNLOAD Jan 16 21:11:07.992000 audit: BPF prog-id=120 op=LOAD Jan 16 21:11:07.993000 audit: BPF prog-id=121 op=LOAD Jan 16 21:11:07.993000 audit: BPF 
prog-id=68 op=UNLOAD Jan 16 21:11:07.993000 audit: BPF prog-id=69 op=UNLOAD Jan 16 21:11:07.994000 audit: BPF prog-id=122 op=LOAD Jan 16 21:11:07.994000 audit: BPF prog-id=71 op=UNLOAD Jan 16 21:11:07.994000 audit: BPF prog-id=123 op=LOAD Jan 16 21:11:07.994000 audit: BPF prog-id=124 op=LOAD Jan 16 21:11:07.994000 audit: BPF prog-id=72 op=UNLOAD Jan 16 21:11:07.994000 audit: BPF prog-id=73 op=UNLOAD Jan 16 21:11:08.004000 audit: BPF prog-id=125 op=LOAD Jan 16 21:11:08.004000 audit: BPF prog-id=63 op=UNLOAD Jan 16 21:11:08.004000 audit: BPF prog-id=126 op=LOAD Jan 16 21:11:08.004000 audit: BPF prog-id=127 op=LOAD Jan 16 21:11:08.004000 audit: BPF prog-id=64 op=UNLOAD Jan 16 21:11:08.004000 audit: BPF prog-id=65 op=UNLOAD Jan 16 21:11:08.004000 audit: BPF prog-id=128 op=LOAD Jan 16 21:11:08.004000 audit: BPF prog-id=129 op=LOAD Jan 16 21:11:08.004000 audit: BPF prog-id=61 op=UNLOAD Jan 16 21:11:08.004000 audit: BPF prog-id=62 op=UNLOAD Jan 16 21:11:08.005000 audit: BPF prog-id=130 op=LOAD Jan 16 21:11:08.005000 audit: BPF prog-id=70 op=UNLOAD Jan 16 21:11:08.226775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:11:08.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:08.242513 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:11:08.348765 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:11:08.348765 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:11:08.348765 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:11:08.348765 kubelet[2777]: I0116 21:11:08.348483 2777 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:11:08.366765 kubelet[2777]: I0116 21:11:08.366695 2777 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 21:11:08.367755 kubelet[2777]: I0116 21:11:08.366995 2777 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:11:08.367755 kubelet[2777]: I0116 21:11:08.367525 2777 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 21:11:08.372023 kubelet[2777]: I0116 21:11:08.371974 2777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 21:11:08.377557 kubelet[2777]: I0116 21:11:08.377511 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:11:08.388896 kubelet[2777]: I0116 21:11:08.388849 2777 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:11:08.395768 kubelet[2777]: I0116 21:11:08.394989 2777 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 21:11:08.396375 kubelet[2777]: I0116 21:11:08.396313 2777 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:11:08.398299 kubelet[2777]: I0116 21:11:08.397777 2777 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580.0.0-p-735bf5553b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:11:08.398603 kubelet[2777]: I0116 21:11:08.398582 2777 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 21:11:08.398716 kubelet[2777]: I0116 21:11:08.398703 2777 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 21:11:08.398912 kubelet[2777]: I0116 21:11:08.398897 2777 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:11:08.399244 kubelet[2777]: I0116 21:11:08.399220 2777 kubelet.go:446] "Attempting to sync node with API server" Jan 16 21:11:08.399824 kubelet[2777]: I0116 21:11:08.399799 2777 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:11:08.401770 kubelet[2777]: I0116 21:11:08.399965 2777 kubelet.go:352] "Adding apiserver pod source" Jan 16 21:11:08.401770 kubelet[2777]: I0116 21:11:08.399988 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:11:08.404466 kubelet[2777]: I0116 21:11:08.404430 2777 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:11:08.408785 kubelet[2777]: I0116 21:11:08.408717 2777 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 21:11:08.409794 kubelet[2777]: I0116 21:11:08.409765 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 21:11:08.410047 kubelet[2777]: I0116 21:11:08.410032 2777 server.go:1287] "Started kubelet" Jan 16 21:11:08.429296 kubelet[2777]: I0116 21:11:08.428808 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 21:11:08.461649 kubelet[2777]: I0116 21:11:08.429589 2777 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:11:08.465884 kubelet[2777]: I0116 21:11:08.465840 2777 server.go:479] "Adding debug handlers to kubelet server" Jan 16 21:11:08.467776 kubelet[2777]: I0116 21:11:08.430265 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:11:08.468711 kubelet[2777]: I0116 21:11:08.468655 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 21:11:08.470305 kubelet[2777]: I0116 21:11:08.469159 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 21:11:08.470305 kubelet[2777]: I0116 21:11:08.429719 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:11:08.471381 kubelet[2777]: I0116 21:11:08.471326 2777 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:11:08.474768 kubelet[2777]: I0116 21:11:08.474619 2777 reconciler.go:26] "Reconciler: start to sync state" Jan 16 21:11:08.480562 kubelet[2777]: E0116 21:11:08.479108 2777 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:11:08.489938 kubelet[2777]: I0116 21:11:08.489857 2777 factory.go:221] Registration of the containerd container factory successfully Jan 16 21:11:08.489938 kubelet[2777]: I0116 21:11:08.489889 2777 factory.go:221] Registration of the systemd container factory successfully Jan 16 21:11:08.491595 kubelet[2777]: I0116 21:11:08.491226 2777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:11:08.523331 kubelet[2777]: I0116 21:11:08.523083 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 21:11:08.530490 kubelet[2777]: I0116 21:11:08.529859 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 21:11:08.530490 kubelet[2777]: I0116 21:11:08.529916 2777 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 21:11:08.530490 kubelet[2777]: I0116 21:11:08.529958 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 21:11:08.530490 kubelet[2777]: I0116 21:11:08.529969 2777 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 21:11:08.530490 kubelet[2777]: E0116 21:11:08.530052 2777 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 21:11:08.614708 kubelet[2777]: I0116 21:11:08.614673 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.614906 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.614941 2777 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615163 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615178 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615203 2777 policy_none.go:49] "None policy: Start" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615217 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615229 2777 state_mem.go:35] "Initializing new in-memory state store" Jan 16 21:11:08.615760 kubelet[2777]: I0116 21:11:08.615379 2777 state_mem.go:75] "Updated machine memory state" Jan 16 21:11:08.630507 kubelet[2777]: E0116 21:11:08.630460 2777 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 21:11:08.632219 kubelet[2777]: I0116 21:11:08.632182 2777 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 21:11:08.633544 kubelet[2777]: I0116 21:11:08.633516 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 21:11:08.633848 kubelet[2777]: I0116 21:11:08.633700 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 21:11:08.636897 kubelet[2777]: I0116 21:11:08.636867 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 21:11:08.644949 kubelet[2777]: E0116 21:11:08.644905 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 21:11:08.746320 kubelet[2777]: I0116 21:11:08.746192 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.761192 kubelet[2777]: I0116 21:11:08.760206 2777 kubelet_node_status.go:124] "Node was previously registered" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.761192 kubelet[2777]: I0116 21:11:08.760326 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.833079 kubelet[2777]: I0116 21:11:08.833040 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.835360 kubelet[2777]: I0116 21:11:08.835125 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.836378 kubelet[2777]: I0116 21:11:08.836241 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.844639 kubelet[2777]: W0116 21:11:08.844356 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:08.848113 kubelet[2777]: W0116 21:11:08.848032 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:08.850941 kubelet[2777]: W0116 21:11:08.850894 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:08.851883 kubelet[2777]: E0116 21:11:08.851246 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" already exists" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.876824 kubelet[2777]: I0116 21:11:08.876776 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-k8s-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.877161 kubelet[2777]: I0116 21:11:08.877036 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-kubeconfig\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.877161 kubelet[2777]: I0116 21:11:08.877102 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.878217 kubelet[2777]: I0116 21:11:08.877842 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-ca-certs\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.878666 kubelet[2777]: I0116 21:11:08.878560 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.878887 kubelet[2777]: I0116 21:11:08.878872 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-flexvolume-dir\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.879024 kubelet[2777]: I0116 21:11:08.879013 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5ecd0d08be0c27ef085da38bccd2531-kubeconfig\") pod \"kube-scheduler-ci-4580.0.0-p-735bf5553b\" (UID: \"e5ecd0d08be0c27ef085da38bccd2531\") " pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.879113 kubelet[2777]: I0116 21:11:08.879103 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3dcc1b88e3d26d585f331fead3e2705-k8s-certs\") pod \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" (UID: \"f3dcc1b88e3d26d585f331fead3e2705\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:08.879306 kubelet[2777]: I0116 21:11:08.879225 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57017fd5845344be1aca69aa1a35778b-ca-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-735bf5553b\" (UID: \"57017fd5845344be1aca69aa1a35778b\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:09.145394 kubelet[2777]: E0116 21:11:09.144930 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.149829 kubelet[2777]: E0116 21:11:09.149797 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.151887 kubelet[2777]: E0116 21:11:09.151805 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.410638 kubelet[2777]: I0116 21:11:09.410010 2777 apiserver.go:52] "Watching apiserver" Jan 16 21:11:09.470419 kubelet[2777]: I0116 21:11:09.470338 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 21:11:09.579766 kubelet[2777]: I0116 21:11:09.578647 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 
21:11:09.580114 kubelet[2777]: I0116 21:11:09.580094 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:09.580495 kubelet[2777]: E0116 21:11:09.580456 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.588872 kubelet[2777]: W0116 21:11:09.588774 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:09.590097 kubelet[2777]: E0116 21:11:09.590023 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-735bf5553b\" already exists" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:09.591132 kubelet[2777]: E0116 21:11:09.591064 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.592052 kubelet[2777]: W0116 21:11:09.591988 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:11:09.592302 kubelet[2777]: E0116 21:11:09.592236 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580.0.0-p-735bf5553b\" already exists" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" Jan 16 21:11:09.592641 kubelet[2777]: E0116 21:11:09.592626 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:09.623021 kubelet[2777]: I0116 21:11:09.622687 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4580.0.0-p-735bf5553b" podStartSLOduration=1.622662312 podStartE2EDuration="1.622662312s" podCreationTimestamp="2026-01-16 21:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:11:09.622575449 +0000 UTC m=+1.353546959" watchObservedRunningTime="2026-01-16 21:11:09.622662312 +0000 UTC m=+1.353633822" Jan 16 21:11:09.623248 kubelet[2777]: I0116 21:11:09.623159 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4580.0.0-p-735bf5553b" podStartSLOduration=1.623141418 podStartE2EDuration="1.623141418s" podCreationTimestamp="2026-01-16 21:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:11:09.611981453 +0000 UTC m=+1.342952960" watchObservedRunningTime="2026-01-16 21:11:09.623141418 +0000 UTC m=+1.354112931" Jan 16 21:11:09.635159 kubelet[2777]: I0116 21:11:09.635063 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-735bf5553b" podStartSLOduration=4.635037623 podStartE2EDuration="4.635037623s" podCreationTimestamp="2026-01-16 21:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:11:09.634402095 +0000 UTC m=+1.365373623" watchObservedRunningTime="2026-01-16 
21:11:09.635037623 +0000 UTC m=+1.366009138" Jan 16 21:11:10.581316 kubelet[2777]: E0116 21:11:10.581258 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:10.581316 kubelet[2777]: E0116 21:11:10.581205 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:11.586203 kubelet[2777]: E0116 21:11:11.585265 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:11.586203 kubelet[2777]: E0116 21:11:11.585505 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:13.396960 kubelet[2777]: E0116 21:11:13.396110 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:13.592386 kubelet[2777]: E0116 21:11:13.592288 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:13.734381 kubelet[2777]: I0116 21:11:13.734307 2777 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 21:11:13.735885 containerd[1584]: time="2026-01-16T21:11:13.734896337Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 21:11:13.736480 kubelet[2777]: I0116 21:11:13.735187 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 21:11:14.510269 systemd[1]: Created slice kubepods-besteffort-pod733cfffc_9f37_4dbc_b941_4da1aa5db4a6.slice - libcontainer container kubepods-besteffort-pod733cfffc_9f37_4dbc_b941_4da1aa5db4a6.slice. 
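The "Nameserver limits exceeded" error that keeps repeating above and below is the kubelet reporting that the host resolver configuration lists more nameservers than it will pass through; the applied line it settles on, 67.207.67.2 67.207.67.3 67.207.67.2, already contains a duplicated entry. A rough Python sketch of that kind of check, assuming the conventional three-nameserver limit — an illustration, not the kubelet's dns.go:

MAX_NAMESERVERS = 3  # assumed limit: glibc and the kubelet forward at most three nameservers

def check_nameservers(resolv_conf: str) -> list[str]:
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    duplicates = {s for s in servers if servers.count(s) > 1}
    if duplicates:
        print("duplicate nameserver entries:", sorted(duplicates))
    if len(servers) > MAX_NAMESERVERS:
        print("entries beyond the limit are omitted:", servers[MAX_NAMESERVERS:])
    return servers[:MAX_NAMESERVERS]

# The applied line from the log, with the repeated first entry:
print(check_nameservers("nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\n"))
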
Jan 16 21:11:14.604694 kubelet[2777]: E0116 21:11:14.604632 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:14.622789 kubelet[2777]: I0116 21:11:14.622662 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/733cfffc-9f37-4dbc-b941-4da1aa5db4a6-xtables-lock\") pod \"kube-proxy-fr65b\" (UID: \"733cfffc-9f37-4dbc-b941-4da1aa5db4a6\") " pod="kube-system/kube-proxy-fr65b" Jan 16 21:11:14.622789 kubelet[2777]: I0116 21:11:14.622790 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtrp\" (UniqueName: \"kubernetes.io/projected/733cfffc-9f37-4dbc-b941-4da1aa5db4a6-kube-api-access-4rtrp\") pod \"kube-proxy-fr65b\" (UID: \"733cfffc-9f37-4dbc-b941-4da1aa5db4a6\") " pod="kube-system/kube-proxy-fr65b" Jan 16 21:11:14.623041 kubelet[2777]: I0116 21:11:14.622835 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/733cfffc-9f37-4dbc-b941-4da1aa5db4a6-kube-proxy\") pod \"kube-proxy-fr65b\" (UID: \"733cfffc-9f37-4dbc-b941-4da1aa5db4a6\") " pod="kube-system/kube-proxy-fr65b" Jan 16 21:11:14.623041 kubelet[2777]: I0116 21:11:14.622864 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/733cfffc-9f37-4dbc-b941-4da1aa5db4a6-lib-modules\") pod \"kube-proxy-fr65b\" (UID: \"733cfffc-9f37-4dbc-b941-4da1aa5db4a6\") " pod="kube-system/kube-proxy-fr65b" Jan 16 21:11:14.689697 update_engine[1562]: I20260116 21:11:14.689580 1562 update_attempter.cc:509] Updating boot flags... Jan 16 21:11:14.836930 kubelet[2777]: E0116 21:11:14.836324 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:14.850099 containerd[1584]: time="2026-01-16T21:11:14.850027246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr65b,Uid:733cfffc-9f37-4dbc-b941-4da1aa5db4a6,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:14.988143 containerd[1584]: time="2026-01-16T21:11:14.986451109Z" level=info msg="connecting to shim 4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65" address="unix:///run/containerd/s/23b3d60eb6b6d2011a16f8e83e6fc313259235997e81b47b83e55fa5dfbc737e" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:15.050421 systemd[1]: Created slice kubepods-besteffort-pod304fda91_1124_4e44_9a03_3e1e1b68a36a.slice - libcontainer container kubepods-besteffort-pod304fda91_1124_4e44_9a03_3e1e1b68a36a.slice. 
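The two "Created slice" entries above line up with the pod UIDs shown in the VerifyControllerAttachedVolume records: with the systemd cgroup driver, each pod's cgroup is a slice named from its QoS class and UID, with the UID's dashes turned into underscores. A small sketch of that mapping as inferred from the names in this log (not taken from kubelet source):

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # "-" is systemd's slice hierarchy separator, so the UID's dashes become underscores
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "733cfffc-9f37-4dbc-b941-4da1aa5db4a6"))
# kubepods-besteffort-pod733cfffc_9f37_4dbc_b941_4da1aa5db4a6.slice   (the kube-proxy-fr65b pod)
print(pod_slice_name("besteffort", "304fda91-1124-4e44-9a03-3e1e1b68a36a"))
# kubepods-besteffort-pod304fda91_1124_4e44_9a03_3e1e1b68a36a.slice   (the tigera-operator pod)
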
Jan 16 21:11:15.131180 kubelet[2777]: I0116 21:11:15.129633 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46vtz\" (UniqueName: \"kubernetes.io/projected/304fda91-1124-4e44-9a03-3e1e1b68a36a-kube-api-access-46vtz\") pod \"tigera-operator-7dcd859c48-fpbmg\" (UID: \"304fda91-1124-4e44-9a03-3e1e1b68a36a\") " pod="tigera-operator/tigera-operator-7dcd859c48-fpbmg" Jan 16 21:11:15.132066 kubelet[2777]: I0116 21:11:15.131675 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/304fda91-1124-4e44-9a03-3e1e1b68a36a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fpbmg\" (UID: \"304fda91-1124-4e44-9a03-3e1e1b68a36a\") " pod="tigera-operator/tigera-operator-7dcd859c48-fpbmg" Jan 16 21:11:15.227360 systemd[1]: Started cri-containerd-4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65.scope - libcontainer container 4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65. Jan 16 21:11:15.260000 audit: BPF prog-id=131 op=LOAD Jan 16 21:11:15.262544 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 16 21:11:15.262666 kernel: audit: type=1334 audit(1768597875.260:433): prog-id=131 op=LOAD Jan 16 21:11:15.263000 audit: BPF prog-id=132 op=LOAD Jan 16 21:11:15.265002 kernel: audit: type=1334 audit(1768597875.263:434): prog-id=132 op=LOAD Jan 16 21:11:15.263000 audit[2858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.273796 kernel: audit: type=1300 audit(1768597875.263:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.279840 kernel: audit: type=1327 audit(1768597875.263:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.280390 kernel: audit: type=1334 audit(1768597875.263:435): prog-id=132 op=UNLOAD Jan 16 21:11:15.263000 audit: BPF prog-id=132 op=UNLOAD Jan 16 21:11:15.263000 audit[2858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.289787 kernel: audit: type=1300 audit(1768597875.263:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.263000 audit: BPF prog-id=133 op=LOAD Jan 16 21:11:15.298700 kernel: audit: type=1327 audit(1768597875.263:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.298841 kernel: audit: type=1334 audit(1768597875.263:436): prog-id=133 op=LOAD Jan 16 21:11:15.298863 kernel: audit: type=1300 audit(1768597875.263:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.263000 audit[2858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.309450 kernel: audit: type=1327 audit(1768597875.263:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.263000 audit: BPF prog-id=134 op=LOAD Jan 16 21:11:15.263000 audit[2858]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.264000 audit: BPF prog-id=134 op=UNLOAD Jan 16 21:11:15.264000 audit[2858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.264000 
audit: BPF prog-id=133 op=UNLOAD Jan 16 21:11:15.264000 audit[2858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.264000 audit: BPF prog-id=135 op=LOAD Jan 16 21:11:15.264000 audit[2858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2846 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235 Jan 16 21:11:15.319008 containerd[1584]: time="2026-01-16T21:11:15.318932473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr65b,Uid:733cfffc-9f37-4dbc-b941-4da1aa5db4a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65\"" Jan 16 21:11:15.320901 kubelet[2777]: E0116 21:11:15.320720 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:15.325798 containerd[1584]: time="2026-01-16T21:11:15.325684168Z" level=info msg="CreateContainer within sandbox \"4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 21:11:15.353774 containerd[1584]: time="2026-01-16T21:11:15.352176950Z" level=info msg="Container f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:15.362195 containerd[1584]: time="2026-01-16T21:11:15.362111511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fpbmg,Uid:304fda91-1124-4e44-9a03-3e1e1b68a36a,Namespace:tigera-operator,Attempt:0,}" Jan 16 21:11:15.373582 containerd[1584]: time="2026-01-16T21:11:15.373493055Z" level=info msg="CreateContainer within sandbox \"4a74b0f44630ad3f6541dbf7116b5f1d08c9b7a126cc943d9ac3eef30946ee65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657\"" Jan 16 21:11:15.376530 containerd[1584]: time="2026-01-16T21:11:15.376458128Z" level=info msg="StartContainer for \"f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657\"" Jan 16 21:11:15.380396 containerd[1584]: time="2026-01-16T21:11:15.380321412Z" level=info msg="connecting to shim f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657" address="unix:///run/containerd/s/23b3d60eb6b6d2011a16f8e83e6fc313259235997e81b47b83e55fa5dfbc737e" protocol=ttrpc version=3 Jan 16 21:11:15.417362 systemd[1]: Started 
cri-containerd-f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657.scope - libcontainer container f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657. Jan 16 21:11:15.423095 containerd[1584]: time="2026-01-16T21:11:15.423031835Z" level=info msg="connecting to shim 3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771" address="unix:///run/containerd/s/9fbda86ebea7bfe0730c45c0773a0b4ac4fe8c20c4f88d44e26014fa1f179b58" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:15.479476 systemd[1]: Started cri-containerd-3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771.scope - libcontainer container 3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771. Jan 16 21:11:15.499000 audit: BPF prog-id=136 op=LOAD Jan 16 21:11:15.499000 audit[2886]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2846 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636630356431333764633832386363316561636630396164663363 Jan 16 21:11:15.499000 audit: BPF prog-id=137 op=LOAD Jan 16 21:11:15.499000 audit[2886]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2846 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636630356431333764633832386363316561636630396164663363 Jan 16 21:11:15.499000 audit: BPF prog-id=137 op=UNLOAD Jan 16 21:11:15.499000 audit[2886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636630356431333764633832386363316561636630396164663363 Jan 16 21:11:15.499000 audit: BPF prog-id=136 op=UNLOAD Jan 16 21:11:15.499000 audit[2886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2846 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636630356431333764633832386363316561636630396164663363 Jan 16 21:11:15.499000 audit: BPF prog-id=138 op=LOAD Jan 16 21:11:15.499000 audit[2886]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2846 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636630356431333764633832386363316561636630396164663363 Jan 16 21:11:15.511000 audit: BPF prog-id=139 op=LOAD Jan 16 21:11:15.515000 audit: BPF prog-id=140 op=LOAD Jan 16 21:11:15.515000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.515000 audit: BPF prog-id=140 op=UNLOAD Jan 16 21:11:15.515000 audit[2926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.518000 audit: BPF prog-id=141 op=LOAD Jan 16 21:11:15.518000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.518000 audit: BPF prog-id=142 op=LOAD Jan 16 21:11:15.518000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.519000 audit: BPF prog-id=142 op=UNLOAD Jan 16 21:11:15.519000 audit[2926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2926 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.519000 audit: BPF prog-id=141 op=UNLOAD Jan 16 21:11:15.519000 audit[2926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.519000 audit: BPF prog-id=143 op=LOAD Jan 16 21:11:15.519000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2907 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353465376662396338333330323038613965363264303039363638 Jan 16 21:11:15.564476 containerd[1584]: time="2026-01-16T21:11:15.564312545Z" level=info msg="StartContainer for \"f7cf05d137dc828cc1eacf09adf3ca60059b3e869d2e9a08cf4754d785cca657\" returns successfully" Jan 16 21:11:15.619933 containerd[1584]: time="2026-01-16T21:11:15.619228030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fpbmg,Uid:304fda91-1124-4e44-9a03-3e1e1b68a36a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771\"" Jan 16 21:11:15.620102 kubelet[2777]: E0116 21:11:15.619504 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:15.628066 containerd[1584]: time="2026-01-16T21:11:15.627111745Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 21:11:15.633601 systemd-resolved[1263]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
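The SYSCALL/PROCTITLE audit records above, and the iptables/ip6tables ones that follow, carry the invoking command line as a hex blob with NUL-separated arguments. A small Python helper — shown as an illustration, not any particular audit tool — decodes them; the runc record above expands to the shim invocation for the kube-proxy sandbox, and the netfilter records below decode the same way (for example, the creation of the KUBE-PROXY-CANARY chain):

def decode_proctitle(hex_blob: str) -> str:
    # PROCTITLE is the process argv, hex-encoded, with NUL bytes between arguments
    return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode("utf-8", errors="replace")

runc_proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461373462306634343633306164336636353431646266373131366235"
print(decode_proctitle(runc_proctitle))
# runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/4a74b0f44630ad3f6541dbf7116b5
# (the trailing path is cut short by the audit record's proctitle length limit)
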
Jan 16 21:11:15.645686 kubelet[2777]: I0116 21:11:15.645617 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fr65b" podStartSLOduration=1.6454096790000001 podStartE2EDuration="1.645409679s" podCreationTimestamp="2026-01-16 21:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:11:15.64422031 +0000 UTC m=+7.375191814" watchObservedRunningTime="2026-01-16 21:11:15.645409679 +0000 UTC m=+7.376381190" Jan 16 21:11:15.741788 kubelet[2777]: E0116 21:11:15.741187 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:15.777697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441665972.mount: Deactivated successfully. Jan 16 21:11:15.893000 audit[2994]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:15.893000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd202ddcf0 a2=0 a3=7ffd202ddcdc items=0 ppid=2909 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:11:15.895000 audit[2995]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:15.895000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdb6f9c60 a2=0 a3=7fffdb6f9c4c items=0 ppid=2909 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:11:15.904000 audit[2996]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:15.904000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff92db2ab0 a2=0 a3=7fff92db2a9c items=0 ppid=2909 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:11:15.906000 audit[2997]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:15.906000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe742f8520 a2=0 a3=7ffe742f850c items=0 ppid=2909 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.906000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:11:15.909000 audit[2998]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:15.909000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4ba57e90 a2=0 a3=7ffd4ba57e7c items=0 ppid=2909 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:11:15.911000 audit[2999]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:15.911000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0d26cf30 a2=0 a3=7ffd0d26cf1c items=0 ppid=2909 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:15.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:11:16.000000 audit[3000]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.000000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd8d05a0b0 a2=0 a3=7ffd8d05a09c items=0 ppid=2909 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:11:16.008000 audit[3002]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.008000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe62d96dd0 a2=0 a3=7ffe62d96dbc items=0 ppid=2909 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 21:11:16.017000 audit[3005]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.017000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff73f07d50 a2=0 a3=7fff73f07d3c items=0 ppid=2909 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.017000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 21:11:16.020000 audit[3006]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.020000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdeeddd790 a2=0 a3=7ffdeeddd77c items=0 ppid=2909 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:11:16.025000 audit[3008]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.025000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf8614b90 a2=0 a3=7ffcf8614b7c items=0 ppid=2909 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:11:16.028000 audit[3009]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.028000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe71d6cdd0 a2=0 a3=7ffe71d6cdbc items=0 ppid=2909 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:11:16.033000 audit[3011]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.033000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed5722610 a2=0 a3=7ffed57225fc items=0 ppid=2909 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:11:16.042000 audit[3014]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.042000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcba1d0e80 a2=0 a3=7ffcba1d0e6c items=0 
ppid=2909 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 21:11:16.045000 audit[3015]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.045000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6d2e63a0 a2=0 a3=7ffe6d2e638c items=0 ppid=2909 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:11:16.050000 audit[3017]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.050000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff47ba9c60 a2=0 a3=7fff47ba9c4c items=0 ppid=2909 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:11:16.052000 audit[3018]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.052000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce9991350 a2=0 a3=7ffce999133c items=0 ppid=2909 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:11:16.057000 audit[3020]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.057000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe71ba4060 a2=0 a3=7ffe71ba404c items=0 ppid=2909 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:11:16.064000 audit[3023]: NETFILTER_CFG table=filter:72 
family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.064000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3ee92290 a2=0 a3=7fff3ee9227c items=0 ppid=2909 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:11:16.072000 audit[3026]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.072000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefbb626c0 a2=0 a3=7ffefbb626ac items=0 ppid=2909 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:11:16.074000 audit[3027]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.074000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd4aa266e0 a2=0 a3=7ffd4aa266cc items=0 ppid=2909 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:11:16.080000 audit[3029]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.080000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffcb68f240 a2=0 a3=7fffcb68f22c items=0 ppid=2909 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:11:16.089000 audit[3032]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.089000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdeecf18a0 a2=0 a3=7ffdeecf188c items=0 ppid=2909 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:11:16.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:11:16.092000 audit[3033]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.092000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccea8f250 a2=0 a3=7ffccea8f23c items=0 ppid=2909 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:11:16.097000 audit[3035]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:11:16.097000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc7ffc0730 a2=0 a3=7ffc7ffc071c items=0 ppid=2909 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:11:16.132000 audit[3041]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:16.132000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbfca7a00 a2=0 a3=7ffdbfca79ec items=0 ppid=2909 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:16.142000 audit[3041]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:16.142000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdbfca7a00 a2=0 a3=7ffdbfca79ec items=0 ppid=2909 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:16.146000 audit[3046]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.146000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe97237e50 a2=0 a3=7ffe97237e3c items=0 ppid=2909 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:11:16.151000 audit[3048]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.151000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd9a9d43d0 a2=0 a3=7ffd9a9d43bc items=0 ppid=2909 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 21:11:16.161000 audit[3051]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.161000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc9effe20 a2=0 a3=7fffc9effe0c items=0 ppid=2909 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 21:11:16.164000 audit[3052]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.164000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6a5463c0 a2=0 a3=7ffc6a5463ac items=0 ppid=2909 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:11:16.169000 audit[3054]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.169000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf3ffaa80 a2=0 a3=7ffdf3ffaa6c items=0 ppid=2909 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:11:16.172000 audit[3055]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 16 21:11:16.172000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd17872df0 a2=0 a3=7ffd17872ddc items=0 ppid=2909 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:11:16.178000 audit[3057]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.178000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef168d850 a2=0 a3=7ffef168d83c items=0 ppid=2909 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 21:11:16.189000 audit[3060]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.189000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc4eb4b0a0 a2=0 a3=7ffc4eb4b08c items=0 ppid=2909 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:11:16.192000 audit[3061]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.192000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef8772410 a2=0 a3=7ffef87723fc items=0 ppid=2909 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:11:16.198000 audit[3063]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.198000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6b5c8640 a2=0 a3=7fff6b5c862c items=0 ppid=2909 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.198000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:11:16.200000 audit[3064]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.200000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffde905e70 a2=0 a3=7fffde905e5c items=0 ppid=2909 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:11:16.205000 audit[3066]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.205000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfc679330 a2=0 a3=7ffcfc67931c items=0 ppid=2909 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:11:16.211000 audit[3069]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.211000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf92dd610 a2=0 a3=7ffcf92dd5fc items=0 ppid=2909 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:11:16.216000 audit[3072]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.216000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff05771190 a2=0 a3=7fff0577117c items=0 ppid=2909 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 21:11:16.218000 audit[3073]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 16 21:11:16.218000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd537ad570 a2=0 a3=7ffd537ad55c items=0 ppid=2909 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.218000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:11:16.222000 audit[3075]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.222000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe5e94eda0 a2=0 a3=7ffe5e94ed8c items=0 ppid=2909 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:11:16.230000 audit[3078]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.230000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd393e3620 a2=0 a3=7ffd393e360c items=0 ppid=2909 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:11:16.232000 audit[3079]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.232000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd17eda5d0 a2=0 a3=7ffd17eda5bc items=0 ppid=2909 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:11:16.237000 audit[3081]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.237000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc85a1ae30 a2=0 a3=7ffc85a1ae1c items=0 ppid=2909 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.237000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:11:16.239000 audit[3082]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.239000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc09eb4910 a2=0 a3=7ffc09eb48fc items=0 ppid=2909 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:11:16.244000 audit[3084]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.244000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd40040f30 a2=0 a3=7ffd40040f1c items=0 ppid=2909 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:11:16.252000 audit[3087]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:11:16.252000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeed0a99f0 a2=0 a3=7ffeed0a99dc items=0 ppid=2909 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:11:16.259000 audit[3089]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:11:16.259000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffe1225070 a2=0 a3=7fffe122505c items=0 ppid=2909 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:16.259000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:16.260000 audit[3089]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:11:16.260000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffe1225070 a2=0 a3=7fffe122505c items=0 ppid=2909 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:11:16.260000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:16.622765 kubelet[2777]: E0116 21:11:16.622021 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:17.500295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073625346.mount: Deactivated successfully. Jan 16 21:11:19.152777 containerd[1584]: time="2026-01-16T21:11:19.151977255Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:19.154634 containerd[1584]: time="2026-01-16T21:11:19.154124575Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 16 21:11:19.159064 containerd[1584]: time="2026-01-16T21:11:19.158851264Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:19.167915 containerd[1584]: time="2026-01-16T21:11:19.167792891Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:19.170617 containerd[1584]: time="2026-01-16T21:11:19.170565974Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.543401129s" Jan 16 21:11:19.170617 containerd[1584]: time="2026-01-16T21:11:19.170610068Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 16 21:11:19.174631 containerd[1584]: time="2026-01-16T21:11:19.174582761Z" level=info msg="CreateContainer within sandbox \"3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 21:11:19.190772 containerd[1584]: time="2026-01-16T21:11:19.188330484Z" level=info msg="Container 6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:19.203147 containerd[1584]: time="2026-01-16T21:11:19.203088590Z" level=info msg="CreateContainer within sandbox \"3954e7fb9c8330208a9e62d009668e228c1b667662e7b66dc07358263cc66771\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178\"" Jan 16 21:11:19.204300 containerd[1584]: time="2026-01-16T21:11:19.204018550Z" level=info msg="StartContainer for \"6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178\"" Jan 16 21:11:19.205701 containerd[1584]: time="2026-01-16T21:11:19.205660982Z" level=info msg="connecting to shim 6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178" address="unix:///run/containerd/s/9fbda86ebea7bfe0730c45c0773a0b4ac4fe8c20c4f88d44e26014fa1f179b58" protocol=ttrpc version=3 Jan 16 21:11:19.239075 systemd[1]: Started 
cri-containerd-6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178.scope - libcontainer container 6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178. Jan 16 21:11:19.255000 audit: BPF prog-id=144 op=LOAD Jan 16 21:11:19.256000 audit: BPF prog-id=145 op=LOAD Jan 16 21:11:19.256000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.256000 audit: BPF prog-id=145 op=UNLOAD Jan 16 21:11:19.256000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.257000 audit: BPF prog-id=146 op=LOAD Jan 16 21:11:19.257000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.257000 audit: BPF prog-id=147 op=LOAD Jan 16 21:11:19.257000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.257000 audit: BPF prog-id=147 op=UNLOAD Jan 16 21:11:19.257000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.257000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.257000 audit: BPF prog-id=146 op=UNLOAD Jan 16 21:11:19.257000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.258000 audit: BPF prog-id=148 op=LOAD Jan 16 21:11:19.258000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2907 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:19.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616561323836343764633261303232306239326337656137623062 Jan 16 21:11:19.285764 containerd[1584]: time="2026-01-16T21:11:19.285430709Z" level=info msg="StartContainer for \"6aaea28647dc2a0220b92c7ea7b0b909a3dd9cc7f04deb93cfe5210a7b031178\" returns successfully" Jan 16 21:11:19.938865 kubelet[2777]: E0116 21:11:19.938505 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:19.954165 kubelet[2777]: I0116 21:11:19.954027 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fpbmg" podStartSLOduration=2.408203844 podStartE2EDuration="5.953999483s" podCreationTimestamp="2026-01-16 21:11:14 +0000 UTC" firstStartedPulling="2026-01-16 21:11:15.625757209 +0000 UTC m=+7.356728695" lastFinishedPulling="2026-01-16 21:11:19.171552835 +0000 UTC m=+10.902524334" observedRunningTime="2026-01-16 21:11:19.648022379 +0000 UTC m=+11.378993895" watchObservedRunningTime="2026-01-16 21:11:19.953999483 +0000 UTC m=+11.684970992" Jan 16 21:11:20.633645 kubelet[2777]: E0116 21:11:20.633603 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:26.274804 sudo[1830]: pam_unix(sudo:session): session closed for user root Jan 16 21:11:26.274000 audit[1830]: USER_END pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:11:26.276048 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 16 21:11:26.276156 kernel: audit: type=1106 audit(1768597886.274:513): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:11:26.274000 audit[1830]: CRED_DISP pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:11:26.292489 kernel: audit: type=1104 audit(1768597886.274:514): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:11:26.350058 sshd[1829]: Connection closed by 68.220.241.50 port 50134 Jan 16 21:11:26.352068 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 16 21:11:26.355000 audit[1825]: USER_END pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:11:26.359527 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Jan 16 21:11:26.364216 kernel: audit: type=1106 audit(1768597886.355:515): pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:11:26.362345 systemd[1]: sshd@6-137.184.190.135:22-68.220.241.50:50134.service: Deactivated successfully. Jan 16 21:11:26.370088 kernel: audit: type=1104 audit(1768597886.356:516): pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:11:26.356000 audit[1825]: CRED_DISP pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:11:26.366256 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 21:11:26.367090 systemd[1]: session-8.scope: Consumed 6.432s CPU time, 158M memory peak. Jan 16 21:11:26.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.190.135:22-68.220.241.50:50134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:26.372112 systemd-logind[1560]: Removed session 8. Jan 16 21:11:26.375787 kernel: audit: type=1131 audit(1768597886.358:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.190.135:22-68.220.241.50:50134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:11:27.351000 audit[3181]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.356787 kernel: audit: type=1325 audit(1768597887.351:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.351000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff64e53960 a2=0 a3=7fff64e5394c items=0 ppid=2909 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.364919 kernel: audit: type=1300 audit(1768597887.351:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff64e53960 a2=0 a3=7fff64e5394c items=0 ppid=2909 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:27.371786 kernel: audit: type=1327 audit(1768597887.351:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:27.359000 audit[3181]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.376809 kernel: audit: type=1325 audit(1768597887.359:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.359000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff64e53960 a2=0 a3=0 items=0 ppid=2909 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.385381 kernel: audit: type=1300 audit(1768597887.359:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff64e53960 a2=0 a3=0 items=0 ppid=2909 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:27.626000 audit[3183]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.626000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc48731930 a2=0 a3=7ffc4873191c items=0 ppid=2909 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:27.631000 audit[3183]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3183 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:27.631000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc48731930 a2=0 a3=0 items=0 ppid=2909 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:27.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:31.181000 audit[3186]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:31.181000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe65d12200 a2=0 a3=7ffe65d121ec items=0 ppid=2909 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:31.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:31.193000 audit[3186]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:31.193000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe65d12200 a2=0 a3=0 items=0 ppid=2909 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:31.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:31.260000 audit[3188]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:31.260000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff21e7a9c0 a2=0 a3=7fff21e7a9ac items=0 ppid=2909 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:31.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:31.266000 audit[3188]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:31.266000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21e7a9c0 a2=0 a3=0 items=0 ppid=2909 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:31.296288 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 16 21:11:31.296399 kernel: audit: type=1300 audit(1768597891.266:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21e7a9c0 a2=0 a3=0 items=0 ppid=2909 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:31.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:31.308795 kernel: audit: type=1327 audit(1768597891.266:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:32.322000 audit[3190]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:32.328911 kernel: audit: type=1325 audit(1768597892.322:526): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:32.329073 kernel: audit: type=1300 audit(1768597892.322:526): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff21ccb510 a2=0 a3=7fff21ccb4fc items=0 ppid=2909 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:32.322000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff21ccb510 a2=0 a3=7fff21ccb4fc items=0 ppid=2909 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:32.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:32.340792 kernel: audit: type=1327 audit(1768597892.322:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:32.335000 audit[3190]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:32.345775 kernel: audit: type=1325 audit(1768597892.335:527): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:32.335000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21ccb510 a2=0 a3=0 items=0 ppid=2909 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:32.352808 kernel: audit: type=1300 audit(1768597892.335:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21ccb510 a2=0 a3=0 items=0 ppid=2909 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:32.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:32.361829 kernel: audit: type=1327 audit(1768597892.335:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:33.510000 audit[3192]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:33.516777 kernel: audit: type=1325 
audit(1768597893.510:528): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:33.510000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc5b8eda80 a2=0 a3=7ffc5b8eda6c items=0 ppid=2909 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:33.525813 kernel: audit: type=1300 audit(1768597893.510:528): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc5b8eda80 a2=0 a3=7ffc5b8eda6c items=0 ppid=2909 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:33.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:33.532000 audit[3192]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:33.532000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5b8eda80 a2=0 a3=0 items=0 ppid=2909 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:33.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:33.565026 systemd[1]: Created slice kubepods-besteffort-pod3ad772a5_9658_49b7_97bb_f4e27fee78fc.slice - libcontainer container kubepods-besteffort-pod3ad772a5_9658_49b7_97bb_f4e27fee78fc.slice. Jan 16 21:11:33.658518 kubelet[2777]: I0116 21:11:33.658424 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad772a5-9658-49b7-97bb-f4e27fee78fc-tigera-ca-bundle\") pod \"calico-typha-586699f554-kr82c\" (UID: \"3ad772a5-9658-49b7-97bb-f4e27fee78fc\") " pod="calico-system/calico-typha-586699f554-kr82c" Jan 16 21:11:33.659257 kubelet[2777]: I0116 21:11:33.658956 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqxwj\" (UniqueName: \"kubernetes.io/projected/3ad772a5-9658-49b7-97bb-f4e27fee78fc-kube-api-access-qqxwj\") pod \"calico-typha-586699f554-kr82c\" (UID: \"3ad772a5-9658-49b7-97bb-f4e27fee78fc\") " pod="calico-system/calico-typha-586699f554-kr82c" Jan 16 21:11:33.659257 kubelet[2777]: I0116 21:11:33.658996 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ad772a5-9658-49b7-97bb-f4e27fee78fc-typha-certs\") pod \"calico-typha-586699f554-kr82c\" (UID: \"3ad772a5-9658-49b7-97bb-f4e27fee78fc\") " pod="calico-system/calico-typha-586699f554-kr82c" Jan 16 21:11:33.740091 systemd[1]: Created slice kubepods-besteffort-pod78a6b129_9463_4511_88e8_6577e211c6bb.slice - libcontainer container kubepods-besteffort-pod78a6b129_9463_4511_88e8_6577e211c6bb.slice. 
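Note on the audit records above: each proctitle= field is the argv of the iptables/ip6tables/iptables-restore invocation that registered the KUBE-* chains and rules, hex-encoded by the audit subsystem because the recorded argument vector is NUL-separated (long records are truncated by the kernel, which is why several decoded commands end mid-word). The following Python sketch is not part of the captured log; it is a minimal helper, assuming a plain-text copy of this log is available, for turning those fields back into readable command lines. The sample value is copied verbatim from the audit[3006] record earlier in this section.

    # Minimal sketch (not part of the captured log): decode hex-encoded
    # proctitle= fields from the audit records into readable command lines.
    import re

    def decode_proctitle(hex_value: str) -> str:
        """Turn one hex-encoded proctitle value into a space-joined command line."""
        raw = bytes.fromhex(hex_value)
        # argv elements are NUL-separated; drop empty trailing pieces
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    def decode_all(log_text: str) -> list[str]:
        """Decode every proctitle= field found in a chunk of log text."""
        return [decode_proctitle(m.group(1))
                for m in re.finditer(r"proctitle=([0-9A-Fa-f]+)", log_text)]

    if __name__ == "__main__":
        # Value copied verbatim from the audit[3006] record above.
        sample = ("69707461626C6573002D770035002D5700313030303030"
                  "002D4E004B5542452D4E4F4445504F525453002D740066696C746572")
        print(decode_proctitle(sample))
        # prints: iptables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter

Run against a saved copy of this log, decode_all() yields the commands behind the records above, e.g. "iptables-restore -w 5 -W 100000 --noflush --counters" for the iptables-restor entries and the truncated INPUT/FORWARD rules carrying the "kubernetes externally-visible service..." comments.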
Jan 16 21:11:33.760086 kubelet[2777]: I0116 21:11:33.760023 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-flexvol-driver-host\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761272 kubelet[2777]: I0116 21:11:33.760841 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-xtables-lock\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761272 kubelet[2777]: I0116 21:11:33.760923 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-lib-modules\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761272 kubelet[2777]: I0116 21:11:33.760951 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-policysync\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761272 kubelet[2777]: I0116 21:11:33.760977 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78a6b129-9463-4511-88e8-6577e211c6bb-tigera-ca-bundle\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761272 kubelet[2777]: I0116 21:11:33.761012 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78a6b129-9463-4511-88e8-6577e211c6bb-node-certs\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761692 kubelet[2777]: I0116 21:11:33.761043 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnw8w\" (UniqueName: \"kubernetes.io/projected/78a6b129-9463-4511-88e8-6577e211c6bb-kube-api-access-qnw8w\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761692 kubelet[2777]: I0116 21:11:33.761086 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-cni-net-dir\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761692 kubelet[2777]: I0116 21:11:33.761113 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-var-lib-calico\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761692 kubelet[2777]: I0116 21:11:33.761143 2777 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-cni-bin-dir\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.761692 kubelet[2777]: I0116 21:11:33.761170 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-cni-log-dir\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.762605 kubelet[2777]: I0116 21:11:33.761195 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78a6b129-9463-4511-88e8-6577e211c6bb-var-run-calico\") pod \"calico-node-wctp9\" (UID: \"78a6b129-9463-4511-88e8-6577e211c6bb\") " pod="calico-system/calico-node-wctp9" Jan 16 21:11:33.865471 kubelet[2777]: E0116 21:11:33.865337 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.865471 kubelet[2777]: W0116 21:11:33.865368 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.866257 kubelet[2777]: E0116 21:11:33.866217 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.866828 kubelet[2777]: E0116 21:11:33.866719 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.866828 kubelet[2777]: W0116 21:11:33.866762 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.866828 kubelet[2777]: E0116 21:11:33.866785 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.867848 kubelet[2777]: E0116 21:11:33.867820 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.867848 kubelet[2777]: W0116 21:11:33.867843 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.868290 kubelet[2777]: E0116 21:11:33.867862 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:33.868478 kubelet[2777]: E0116 21:11:33.868327 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.868478 kubelet[2777]: W0116 21:11:33.868342 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.868725 kubelet[2777]: E0116 21:11:33.868492 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.869111 kubelet[2777]: E0116 21:11:33.869071 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.869212 kubelet[2777]: W0116 21:11:33.869145 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.869212 kubelet[2777]: E0116 21:11:33.869162 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.871769 kubelet[2777]: E0116 21:11:33.869561 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.871769 kubelet[2777]: W0116 21:11:33.869583 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.871769 kubelet[2777]: E0116 21:11:33.869598 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.871769 kubelet[2777]: E0116 21:11:33.871579 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.871769 kubelet[2777]: W0116 21:11:33.871598 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.871769 kubelet[2777]: E0116 21:11:33.871621 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.876293 kubelet[2777]: E0116 21:11:33.875911 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.876293 kubelet[2777]: W0116 21:11:33.875955 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.876293 kubelet[2777]: E0116 21:11:33.875994 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:33.882783 kubelet[2777]: E0116 21:11:33.878834 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:33.882783 kubelet[2777]: E0116 21:11:33.879877 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.882783 kubelet[2777]: W0116 21:11:33.879906 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.882783 kubelet[2777]: E0116 21:11:33.879943 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.882783 kubelet[2777]: E0116 21:11:33.882341 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.882783 kubelet[2777]: W0116 21:11:33.882376 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.882783 kubelet[2777]: E0116 21:11:33.882410 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.883281 containerd[1584]: time="2026-01-16T21:11:33.881193691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586699f554-kr82c,Uid:3ad772a5-9658-49b7-97bb-f4e27fee78fc,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:33.883765 kubelet[2777]: E0116 21:11:33.882915 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.883765 kubelet[2777]: W0116 21:11:33.882934 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.883765 kubelet[2777]: E0116 21:11:33.882959 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.885452 kubelet[2777]: E0116 21:11:33.884628 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.885452 kubelet[2777]: W0116 21:11:33.884661 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.885452 kubelet[2777]: E0116 21:11:33.884708 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:33.886232 kubelet[2777]: E0116 21:11:33.886202 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.886913 kubelet[2777]: W0116 21:11:33.886832 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.889120 kubelet[2777]: E0116 21:11:33.887241 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.889120 kubelet[2777]: E0116 21:11:33.887478 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.889120 kubelet[2777]: W0116 21:11:33.887503 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.889120 kubelet[2777]: E0116 21:11:33.887819 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.890989 kubelet[2777]: E0116 21:11:33.889988 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.890989 kubelet[2777]: W0116 21:11:33.890819 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.890989 kubelet[2777]: E0116 21:11:33.890863 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.894884 kubelet[2777]: E0116 21:11:33.894818 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.895401 kubelet[2777]: W0116 21:11:33.895014 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.895401 kubelet[2777]: E0116 21:11:33.895057 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.896075 kubelet[2777]: E0116 21:11:33.895890 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.896276 kubelet[2777]: W0116 21:11:33.896249 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.896964 kubelet[2777]: E0116 21:11:33.896886 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:33.908753 kubelet[2777]: E0116 21:11:33.908508 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:33.908753 kubelet[2777]: W0116 21:11:33.908543 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:33.908753 kubelet[2777]: E0116 21:11:33.908573 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:33.960313 containerd[1584]: time="2026-01-16T21:11:33.960171935Z" level=info msg="connecting to shim 65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5" address="unix:///run/containerd/s/47d8ba8892baa84e26a5a2bd6257a4989a1a42c66095b89078f0b1f19b642f7a" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:34.021508 systemd[1]: Started cri-containerd-65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5.scope - libcontainer container 65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5. Jan 16 21:11:34.043584 kubelet[2777]: E0116 21:11:34.043537 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:34.046944 containerd[1584]: time="2026-01-16T21:11:34.046648782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wctp9,Uid:78a6b129-9463-4511-88e8-6577e211c6bb,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:34.051965 kubelet[2777]: E0116 21:11:34.051296 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:34.052841 kubelet[2777]: E0116 21:11:34.052803 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.052841 kubelet[2777]: W0116 21:11:34.052831 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.052965 kubelet[2777]: E0116 21:11:34.052854 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.053615 kubelet[2777]: E0116 21:11:34.053452 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.053615 kubelet[2777]: W0116 21:11:34.053590 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.053615 kubelet[2777]: E0116 21:11:34.053606 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.054140 kubelet[2777]: E0116 21:11:34.054111 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.054140 kubelet[2777]: W0116 21:11:34.054131 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.054215 kubelet[2777]: E0116 21:11:34.054151 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.054952 kubelet[2777]: E0116 21:11:34.054887 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.054952 kubelet[2777]: W0116 21:11:34.054945 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.054952 kubelet[2777]: E0116 21:11:34.054959 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.055684 kubelet[2777]: E0116 21:11:34.055661 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.055684 kubelet[2777]: W0116 21:11:34.055684 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.055886 kubelet[2777]: E0116 21:11:34.055701 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.056563 kubelet[2777]: E0116 21:11:34.056412 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.056563 kubelet[2777]: W0116 21:11:34.056436 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.056563 kubelet[2777]: E0116 21:11:34.056449 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.057195 kubelet[2777]: E0116 21:11:34.057009 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.057195 kubelet[2777]: W0116 21:11:34.057028 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.057195 kubelet[2777]: E0116 21:11:34.057041 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.057958 kubelet[2777]: E0116 21:11:34.057931 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.057958 kubelet[2777]: W0116 21:11:34.057954 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.058118 kubelet[2777]: E0116 21:11:34.057970 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.059719 kubelet[2777]: E0116 21:11:34.059032 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.059719 kubelet[2777]: W0116 21:11:34.059058 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.059719 kubelet[2777]: E0116 21:11:34.059076 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.060573 kubelet[2777]: E0116 21:11:34.060536 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.060573 kubelet[2777]: W0116 21:11:34.060566 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.060678 kubelet[2777]: E0116 21:11:34.060593 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.060851 kubelet[2777]: E0116 21:11:34.060831 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.060851 kubelet[2777]: W0116 21:11:34.060845 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.061090 kubelet[2777]: E0116 21:11:34.060855 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.061272 kubelet[2777]: E0116 21:11:34.061252 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.061272 kubelet[2777]: W0116 21:11:34.061269 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.061341 kubelet[2777]: E0116 21:11:34.061284 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.062532 kubelet[2777]: E0116 21:11:34.062504 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.062532 kubelet[2777]: W0116 21:11:34.062525 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.062532 kubelet[2777]: E0116 21:11:34.062538 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.062948 kubelet[2777]: E0116 21:11:34.062877 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.062948 kubelet[2777]: W0116 21:11:34.062896 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.062948 kubelet[2777]: E0116 21:11:34.062908 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.064677 kubelet[2777]: E0116 21:11:34.064381 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.064677 kubelet[2777]: W0116 21:11:34.064405 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.064677 kubelet[2777]: E0116 21:11:34.064422 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.064677 kubelet[2777]: E0116 21:11:34.064640 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.064677 kubelet[2777]: W0116 21:11:34.064651 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.064677 kubelet[2777]: E0116 21:11:34.064664 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.066198 kubelet[2777]: E0116 21:11:34.066165 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.066261 kubelet[2777]: W0116 21:11:34.066216 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.066261 kubelet[2777]: E0116 21:11:34.066239 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.071596 kubelet[2777]: E0116 21:11:34.069918 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.071596 kubelet[2777]: W0116 21:11:34.069952 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.071596 kubelet[2777]: E0116 21:11:34.069982 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.071596 kubelet[2777]: E0116 21:11:34.070931 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.071596 kubelet[2777]: W0116 21:11:34.070951 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.071596 kubelet[2777]: E0116 21:11:34.070974 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.071995 kubelet[2777]: E0116 21:11:34.071892 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.071995 kubelet[2777]: W0116 21:11:34.071907 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.071995 kubelet[2777]: E0116 21:11:34.071922 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.073588 kubelet[2777]: E0116 21:11:34.072835 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.073588 kubelet[2777]: W0116 21:11:34.072860 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.073588 kubelet[2777]: E0116 21:11:34.072879 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.073588 kubelet[2777]: I0116 21:11:34.072917 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/948ce78a-6a96-42a6-a8f0-360a2ec834df-varrun\") pod \"csi-node-driver-fqdjm\" (UID: \"948ce78a-6a96-42a6-a8f0-360a2ec834df\") " pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:34.073588 kubelet[2777]: E0116 21:11:34.073161 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.073588 kubelet[2777]: W0116 21:11:34.073177 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.073588 kubelet[2777]: E0116 21:11:34.073192 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.073588 kubelet[2777]: I0116 21:11:34.073212 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/948ce78a-6a96-42a6-a8f0-360a2ec834df-registration-dir\") pod \"csi-node-driver-fqdjm\" (UID: \"948ce78a-6a96-42a6-a8f0-360a2ec834df\") " pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:34.076955 kubelet[2777]: E0116 21:11:34.074946 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.076955 kubelet[2777]: W0116 21:11:34.074967 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.076955 kubelet[2777]: E0116 21:11:34.075008 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.076955 kubelet[2777]: I0116 21:11:34.075244 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/948ce78a-6a96-42a6-a8f0-360a2ec834df-socket-dir\") pod \"csi-node-driver-fqdjm\" (UID: \"948ce78a-6a96-42a6-a8f0-360a2ec834df\") " pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:34.076955 kubelet[2777]: E0116 21:11:34.075330 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.076955 kubelet[2777]: W0116 21:11:34.075342 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.076955 kubelet[2777]: E0116 21:11:34.075361 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.076955 kubelet[2777]: E0116 21:11:34.075519 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.076955 kubelet[2777]: W0116 21:11:34.075527 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.075536 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.075706 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.077200 kubelet[2777]: W0116 21:11:34.075713 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.075723 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.076573 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.077200 kubelet[2777]: W0116 21:11:34.076588 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.076605 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.077200 kubelet[2777]: E0116 21:11:34.076959 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.077200 kubelet[2777]: W0116 21:11:34.076970 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.077625 kubelet[2777]: E0116 21:11:34.077597 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.079015 kubelet[2777]: I0116 21:11:34.077640 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dctkk\" (UniqueName: \"kubernetes.io/projected/948ce78a-6a96-42a6-a8f0-360a2ec834df-kube-api-access-dctkk\") pod \"csi-node-driver-fqdjm\" (UID: \"948ce78a-6a96-42a6-a8f0-360a2ec834df\") " pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:34.079015 kubelet[2777]: E0116 21:11:34.078024 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.079015 kubelet[2777]: W0116 21:11:34.078038 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.079015 kubelet[2777]: E0116 21:11:34.078061 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.079015 kubelet[2777]: E0116 21:11:34.078563 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.079015 kubelet[2777]: W0116 21:11:34.078572 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.079015 kubelet[2777]: E0116 21:11:34.078583 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.079015 kubelet[2777]: E0116 21:11:34.078989 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.079015 kubelet[2777]: W0116 21:11:34.078997 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.079248 kubelet[2777]: E0116 21:11:34.079222 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.079274 kubelet[2777]: I0116 21:11:34.079259 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/948ce78a-6a96-42a6-a8f0-360a2ec834df-kubelet-dir\") pod \"csi-node-driver-fqdjm\" (UID: \"948ce78a-6a96-42a6-a8f0-360a2ec834df\") " pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.079839 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.082169 kubelet[2777]: W0116 21:11:34.079857 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.080236 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.080508 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.082169 kubelet[2777]: W0116 21:11:34.080519 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.080539 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.081521 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.082169 kubelet[2777]: W0116 21:11:34.081627 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.081647 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.082169 kubelet[2777]: E0116 21:11:34.082076 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.082615 kubelet[2777]: W0116 21:11:34.082088 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.082615 kubelet[2777]: E0116 21:11:34.082100 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.154458 containerd[1584]: time="2026-01-16T21:11:34.154403496Z" level=info msg="connecting to shim 06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0" address="unix:///run/containerd/s/717ef4aa002f06ed68381b1b6f44c295c83bfc03587da8acd5a575450081678f" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:34.156000 audit: BPF prog-id=149 op=LOAD Jan 16 21:11:34.159000 audit: BPF prog-id=150 op=LOAD Jan 16 21:11:34.159000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e238 a2=98 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.159000 audit: BPF prog-id=150 op=UNLOAD Jan 16 21:11:34.159000 audit[3234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.161000 audit: BPF prog-id=151 op=LOAD Jan 16 21:11:34.161000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.161000 audit: BPF prog-id=152 op=LOAD Jan 16 21:11:34.161000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.162000 audit: BPF prog-id=152 op=UNLOAD Jan 16 21:11:34.162000 audit[3234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.162000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.162000 audit: BPF prog-id=151 op=UNLOAD Jan 16 21:11:34.162000 audit[3234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.163000 audit: BPF prog-id=153 op=LOAD Jan 16 21:11:34.163000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e6e8 a2=98 a3=0 items=0 ppid=3223 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635393338373330663865343630356535323837316366333337313531 Jan 16 21:11:34.180498 kubelet[2777]: E0116 21:11:34.180264 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.180498 kubelet[2777]: W0116 21:11:34.180295 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.180498 kubelet[2777]: E0116 21:11:34.180328 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.182201 kubelet[2777]: E0116 21:11:34.181967 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.182201 kubelet[2777]: W0116 21:11:34.181993 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.182201 kubelet[2777]: E0116 21:11:34.182028 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.182835 kubelet[2777]: E0116 21:11:34.182494 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.182835 kubelet[2777]: W0116 21:11:34.182513 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.182835 kubelet[2777]: E0116 21:11:34.182556 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.183987 kubelet[2777]: E0116 21:11:34.183843 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.183987 kubelet[2777]: W0116 21:11:34.183869 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.183987 kubelet[2777]: E0116 21:11:34.183918 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.184497 kubelet[2777]: E0116 21:11:34.184477 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.184608 kubelet[2777]: W0116 21:11:34.184592 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.184833 kubelet[2777]: E0116 21:11:34.184798 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.186174 kubelet[2777]: E0116 21:11:34.186144 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.186966 kubelet[2777]: W0116 21:11:34.186814 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.186966 kubelet[2777]: E0116 21:11:34.186909 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.187110 kubelet[2777]: E0116 21:11:34.187099 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.187152 kubelet[2777]: W0116 21:11:34.187144 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.187234 kubelet[2777]: E0116 21:11:34.187208 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.187564 kubelet[2777]: E0116 21:11:34.187548 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.188128 kubelet[2777]: W0116 21:11:34.187976 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.188128 kubelet[2777]: E0116 21:11:34.188037 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.188264 kubelet[2777]: E0116 21:11:34.188252 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.188307 kubelet[2777]: W0116 21:11:34.188299 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.188918 kubelet[2777]: E0116 21:11:34.188388 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.188918 kubelet[2777]: E0116 21:11:34.188798 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.188918 kubelet[2777]: W0116 21:11:34.188816 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.188918 kubelet[2777]: E0116 21:11:34.188841 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.189650 kubelet[2777]: E0116 21:11:34.189525 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.189650 kubelet[2777]: W0116 21:11:34.189558 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.190882 kubelet[2777]: E0116 21:11:34.189906 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.190882 kubelet[2777]: E0116 21:11:34.190829 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.191240 kubelet[2777]: W0116 21:11:34.191107 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.191240 kubelet[2777]: E0116 21:11:34.191163 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.191826 kubelet[2777]: E0116 21:11:34.191681 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.191826 kubelet[2777]: W0116 21:11:34.191700 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.191826 kubelet[2777]: E0116 21:11:34.191794 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.192159 kubelet[2777]: E0116 21:11:34.191975 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.192844 kubelet[2777]: W0116 21:11:34.192814 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.192968 kubelet[2777]: E0116 21:11:34.192912 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.193421 kubelet[2777]: E0116 21:11:34.193396 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.193721 kubelet[2777]: W0116 21:11:34.193422 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.193861 kubelet[2777]: E0116 21:11:34.193830 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.195026 kubelet[2777]: E0116 21:11:34.194997 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.195026 kubelet[2777]: W0116 21:11:34.195022 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.195228 kubelet[2777]: E0116 21:11:34.195087 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.195916 kubelet[2777]: E0116 21:11:34.195872 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.195916 kubelet[2777]: W0116 21:11:34.195909 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.196105 kubelet[2777]: E0116 21:11:34.196056 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.196898 kubelet[2777]: E0116 21:11:34.196873 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.196898 kubelet[2777]: W0116 21:11:34.196896 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.198514 kubelet[2777]: E0116 21:11:34.197859 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.201232 kubelet[2777]: E0116 21:11:34.200810 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.201232 kubelet[2777]: W0116 21:11:34.200856 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.201232 kubelet[2777]: E0116 21:11:34.201059 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.202235 kubelet[2777]: E0116 21:11:34.202201 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.202323 kubelet[2777]: W0116 21:11:34.202236 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.202752 kubelet[2777]: E0116 21:11:34.202705 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.205905 kubelet[2777]: E0116 21:11:34.203701 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.205905 kubelet[2777]: W0116 21:11:34.203759 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.205905 kubelet[2777]: E0116 21:11:34.204594 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.205905 kubelet[2777]: W0116 21:11:34.204613 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.205905 kubelet[2777]: E0116 21:11:34.205209 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.205905 kubelet[2777]: W0116 21:11:34.205235 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.205905 kubelet[2777]: E0116 21:11:34.205259 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.209852 kubelet[2777]: E0116 21:11:34.207237 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.210366 kubelet[2777]: E0116 21:11:34.210186 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.210366 kubelet[2777]: W0116 21:11:34.210224 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.210366 kubelet[2777]: E0116 21:11:34.210260 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.210366 kubelet[2777]: E0116 21:11:34.210334 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.218971 kubelet[2777]: E0116 21:11:34.218925 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.218971 kubelet[2777]: W0116 21:11:34.218960 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.219214 kubelet[2777]: E0116 21:11:34.218996 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:34.231665 systemd[1]: Started cri-containerd-06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0.scope - libcontainer container 06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0. Jan 16 21:11:34.259705 kubelet[2777]: E0116 21:11:34.259664 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:34.259705 kubelet[2777]: W0116 21:11:34.259698 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:34.260010 kubelet[2777]: E0116 21:11:34.259724 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:34.324855 containerd[1584]: time="2026-01-16T21:11:34.323628960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586699f554-kr82c,Uid:3ad772a5-9658-49b7-97bb-f4e27fee78fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5\"" Jan 16 21:11:34.330772 kubelet[2777]: E0116 21:11:34.328844 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:34.337436 containerd[1584]: time="2026-01-16T21:11:34.337069474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 21:11:34.345763 systemd[1]: Started sshd@7-137.184.190.135:22-165.154.236.241:55916.service - OpenSSH per-connection server daemon (165.154.236.241:55916). Jan 16 21:11:34.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.190.135:22-165.154.236.241:55916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:11:34.422000 audit: BPF prog-id=154 op=LOAD Jan 16 21:11:34.423000 audit: BPF prog-id=155 op=LOAD Jan 16 21:11:34.423000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=155 op=UNLOAD Jan 16 21:11:34.424000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=156 op=LOAD Jan 16 21:11:34.424000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=157 op=LOAD Jan 16 21:11:34.424000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=157 op=UNLOAD Jan 16 21:11:34.424000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=156 op=UNLOAD Jan 16 21:11:34.424000 
audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.424000 audit: BPF prog-id=158 op=LOAD Jan 16 21:11:34.424000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3308 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036343337363030666633646466316365636435336335386235306534 Jan 16 21:11:34.437078 sshd[3370]: Connection closed by 165.154.236.241 port 55916 Jan 16 21:11:34.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.190.135:22-165.154.236.241:55916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:34.439789 systemd[1]: sshd@7-137.184.190.135:22-165.154.236.241:55916.service: Deactivated successfully. Jan 16 21:11:34.510765 containerd[1584]: time="2026-01-16T21:11:34.510712415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wctp9,Uid:78a6b129-9463-4511-88e8-6577e211c6bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\"" Jan 16 21:11:34.513123 kubelet[2777]: E0116 21:11:34.513083 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:34.609000 audit[3386]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:34.609000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffacbf6880 a2=0 a3=7fffacbf686c items=0 ppid=2909 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:34.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:34.613000 audit[3386]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:34.613000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffacbf6880 a2=0 a3=0 items=0 ppid=2909 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:11:34.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:35.530760 kubelet[2777]: E0116 21:11:35.530663 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:36.098057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793168123.mount: Deactivated successfully. Jan 16 21:11:36.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.135:22-165.154.236.241:53413 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:36.927548 systemd[1]: Started sshd@8-137.184.190.135:22-165.154.236.241:53413.service - OpenSSH per-connection server daemon (165.154.236.241:53413). Jan 16 21:11:36.930210 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 16 21:11:36.930338 kernel: audit: type=1130 audit(1768597896.927:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.135:22-165.154.236.241:53413 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:37.175438 sshd[3397]: Unable to negotiate with 165.154.236.241 port 53413: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Jan 16 21:11:37.178591 systemd[1]: sshd@8-137.184.190.135:22-165.154.236.241:53413.service: Deactivated successfully. Jan 16 21:11:37.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.135:22-165.154.236.241:53413 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:11:37.188335 kernel: audit: type=1131 audit(1768597897.178:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.135:22-165.154.236.241:53413 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:11:37.530638 kubelet[2777]: E0116 21:11:37.530292 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:37.651208 containerd[1584]: time="2026-01-16T21:11:37.651023456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:37.652812 containerd[1584]: time="2026-01-16T21:11:37.652747644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 16 21:11:37.655919 containerd[1584]: time="2026-01-16T21:11:37.655851944Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:37.658199 containerd[1584]: time="2026-01-16T21:11:37.658144479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:37.664842 containerd[1584]: time="2026-01-16T21:11:37.664780845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.327655418s" Jan 16 21:11:37.664842 containerd[1584]: time="2026-01-16T21:11:37.664827786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 16 21:11:37.667070 containerd[1584]: time="2026-01-16T21:11:37.667012161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 21:11:37.687978 containerd[1584]: time="2026-01-16T21:11:37.687921630Z" level=info msg="CreateContainer within sandbox \"65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 21:11:37.698584 containerd[1584]: time="2026-01-16T21:11:37.698526873Z" level=info msg="Container bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:37.708538 containerd[1584]: time="2026-01-16T21:11:37.708463512Z" level=info msg="CreateContainer within sandbox \"65938730f8e4605e52871cf337151946c3764b787f037bdcfe594701f8d6d3e5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb\"" Jan 16 21:11:37.711371 containerd[1584]: time="2026-01-16T21:11:37.710010021Z" level=info msg="StartContainer for \"bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb\"" Jan 16 21:11:37.711371 containerd[1584]: time="2026-01-16T21:11:37.711312445Z" level=info msg="connecting to shim bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb" address="unix:///run/containerd/s/47d8ba8892baa84e26a5a2bd6257a4989a1a42c66095b89078f0b1f19b642f7a" protocol=ttrpc version=3 Jan 16 21:11:37.748268 systemd[1]: Started 
cri-containerd-bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb.scope - libcontainer container bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb. Jan 16 21:11:37.768000 audit: BPF prog-id=159 op=LOAD Jan 16 21:11:37.769000 audit: BPF prog-id=160 op=LOAD Jan 16 21:11:37.772760 kernel: audit: type=1334 audit(1768597897.768:552): prog-id=159 op=LOAD Jan 16 21:11:37.772847 kernel: audit: type=1334 audit(1768597897.769:553): prog-id=160 op=LOAD Jan 16 21:11:37.769000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.776817 kernel: audit: type=1300 audit(1768597897.769:553): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.787783 kernel: audit: type=1327 audit(1768597897.769:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.789792 kernel: audit: type=1334 audit(1768597897.769:554): prog-id=160 op=UNLOAD Jan 16 21:11:37.769000 audit: BPF prog-id=160 op=UNLOAD Jan 16 21:11:37.769000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.797379 kernel: audit: type=1300 audit(1768597897.769:554): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.797551 kernel: audit: type=1327 audit(1768597897.769:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.771000 audit: BPF prog-id=161 op=LOAD Jan 16 21:11:37.802130 kernel: audit: type=1334 audit(1768597897.771:555): prog-id=161 op=LOAD Jan 16 21:11:37.771000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3223 pid=3404 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.771000 audit: BPF prog-id=162 op=LOAD Jan 16 21:11:37.771000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.771000 audit: BPF prog-id=162 op=UNLOAD Jan 16 21:11:37.771000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.771000 audit: BPF prog-id=161 op=UNLOAD Jan 16 21:11:37.771000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.771000 audit: BPF prog-id=163 op=LOAD Jan 16 21:11:37.771000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3223 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:37.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261633334623863373439303461303030383563346630623331623235 Jan 16 21:11:37.841201 containerd[1584]: time="2026-01-16T21:11:37.841072507Z" level=info msg="StartContainer for \"bac34b8c74904a00085c4f0b31b25b8db27b240fb9e03418fbab28154b7febfb\" returns successfully" Jan 16 21:11:38.754233 kubelet[2777]: E0116 21:11:38.754131 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:38.774090 kubelet[2777]: I0116 21:11:38.773647 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-586699f554-kr82c" podStartSLOduration=2.438730175 podStartE2EDuration="5.773618064s" podCreationTimestamp="2026-01-16 21:11:33 +0000 UTC" firstStartedPulling="2026-01-16 21:11:34.331656568 +0000 UTC m=+26.062628054" lastFinishedPulling="2026-01-16 21:11:37.666544459 +0000 UTC m=+29.397515943" observedRunningTime="2026-01-16 21:11:38.771016033 +0000 UTC m=+30.501987534" watchObservedRunningTime="2026-01-16 21:11:38.773618064 +0000 UTC m=+30.504589582" Jan 16 21:11:38.807300 kubelet[2777]: E0116 21:11:38.807257 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.807300 kubelet[2777]: W0116 21:11:38.807285 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.807300 kubelet[2777]: E0116 21:11:38.807309 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.807708 kubelet[2777]: E0116 21:11:38.807527 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.807708 kubelet[2777]: W0116 21:11:38.807538 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.807708 kubelet[2777]: E0116 21:11:38.807553 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.807864 kubelet[2777]: E0116 21:11:38.807759 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.807864 kubelet[2777]: W0116 21:11:38.807774 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.807864 kubelet[2777]: E0116 21:11:38.807786 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.808034 kubelet[2777]: E0116 21:11:38.807997 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.808034 kubelet[2777]: W0116 21:11:38.808006 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.808034 kubelet[2777]: E0116 21:11:38.808020 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.808294 kubelet[2777]: E0116 21:11:38.808284 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.808469 kubelet[2777]: W0116 21:11:38.808296 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.808469 kubelet[2777]: E0116 21:11:38.808309 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.808689 kubelet[2777]: E0116 21:11:38.808484 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.808689 kubelet[2777]: W0116 21:11:38.808494 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.808689 kubelet[2777]: E0116 21:11:38.808506 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.808918 kubelet[2777]: E0116 21:11:38.808698 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.808918 kubelet[2777]: W0116 21:11:38.808706 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.808918 kubelet[2777]: E0116 21:11:38.808716 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.808918 kubelet[2777]: E0116 21:11:38.808909 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.808918 kubelet[2777]: W0116 21:11:38.808917 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.809091 kubelet[2777]: E0116 21:11:38.808928 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.809133 kubelet[2777]: E0116 21:11:38.809111 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.809133 kubelet[2777]: W0116 21:11:38.809119 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.809133 kubelet[2777]: E0116 21:11:38.809127 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.809300 kubelet[2777]: E0116 21:11:38.809275 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.809300 kubelet[2777]: W0116 21:11:38.809284 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.809300 kubelet[2777]: E0116 21:11:38.809295 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.809449 kubelet[2777]: E0116 21:11:38.809426 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.809449 kubelet[2777]: W0116 21:11:38.809440 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.809449 kubelet[2777]: E0116 21:11:38.809447 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.809758 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.810179 kubelet[2777]: W0116 21:11:38.809771 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.809781 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.809958 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.810179 kubelet[2777]: W0116 21:11:38.809966 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.809980 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.810156 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.810179 kubelet[2777]: W0116 21:11:38.810165 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.810179 kubelet[2777]: E0116 21:11:38.810173 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.810648 kubelet[2777]: E0116 21:11:38.810384 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.810648 kubelet[2777]: W0116 21:11:38.810391 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.810648 kubelet[2777]: E0116 21:11:38.810401 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.825982 kubelet[2777]: E0116 21:11:38.825939 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.825982 kubelet[2777]: W0116 21:11:38.825969 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.825982 kubelet[2777]: E0116 21:11:38.825993 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.826377 kubelet[2777]: E0116 21:11:38.826292 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.826377 kubelet[2777]: W0116 21:11:38.826304 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.826377 kubelet[2777]: E0116 21:11:38.826327 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.826667 kubelet[2777]: E0116 21:11:38.826567 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.826667 kubelet[2777]: W0116 21:11:38.826576 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.826667 kubelet[2777]: E0116 21:11:38.826586 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.826884 kubelet[2777]: E0116 21:11:38.826822 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.826884 kubelet[2777]: W0116 21:11:38.826831 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.826884 kubelet[2777]: E0116 21:11:38.826847 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.827176 kubelet[2777]: E0116 21:11:38.827086 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.827176 kubelet[2777]: W0116 21:11:38.827093 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.827176 kubelet[2777]: E0116 21:11:38.827104 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.827284 kubelet[2777]: E0116 21:11:38.827230 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.827284 kubelet[2777]: W0116 21:11:38.827236 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.827284 kubelet[2777]: E0116 21:11:38.827244 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.827402 kubelet[2777]: E0116 21:11:38.827382 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.827402 kubelet[2777]: W0116 21:11:38.827388 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.827402 kubelet[2777]: E0116 21:11:38.827395 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.827704 kubelet[2777]: E0116 21:11:38.827690 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.827704 kubelet[2777]: W0116 21:11:38.827702 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.828001 kubelet[2777]: E0116 21:11:38.827856 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.828001 kubelet[2777]: E0116 21:11:38.827755 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.828001 kubelet[2777]: W0116 21:11:38.827863 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.828322 kubelet[2777]: E0116 21:11:38.828147 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.828525 kubelet[2777]: E0116 21:11:38.828479 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.828525 kubelet[2777]: W0116 21:11:38.828501 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.828798 kubelet[2777]: E0116 21:11:38.828653 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.828953 kubelet[2777]: E0116 21:11:38.828939 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.829095 kubelet[2777]: W0116 21:11:38.829012 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.829095 kubelet[2777]: E0116 21:11:38.829045 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.829547 kubelet[2777]: E0116 21:11:38.829404 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.829547 kubelet[2777]: W0116 21:11:38.829419 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.829547 kubelet[2777]: E0116 21:11:38.829441 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.829923 kubelet[2777]: E0116 21:11:38.829908 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.830003 kubelet[2777]: W0116 21:11:38.829991 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.830109 kubelet[2777]: E0116 21:11:38.830084 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.830449 kubelet[2777]: E0116 21:11:38.830411 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.830449 kubelet[2777]: W0116 21:11:38.830429 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.830764 kubelet[2777]: E0116 21:11:38.830592 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:38.830918 kubelet[2777]: E0116 21:11:38.830902 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.831006 kubelet[2777]: W0116 21:11:38.830993 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.831610 kubelet[2777]: E0116 21:11:38.831439 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.831844 kubelet[2777]: E0116 21:11:38.831829 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.831938 kubelet[2777]: W0116 21:11:38.831924 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.832237 kubelet[2777]: E0116 21:11:38.832016 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.832913 kubelet[2777]: E0116 21:11:38.832893 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.833012 kubelet[2777]: W0116 21:11:38.832997 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.833142 kubelet[2777]: E0116 21:11:38.833098 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:11:38.833567 kubelet[2777]: E0116 21:11:38.833534 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:11:38.833567 kubelet[2777]: W0116 21:11:38.833561 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:11:38.833704 kubelet[2777]: E0116 21:11:38.833580 2777 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:11:39.068153 containerd[1584]: time="2026-01-16T21:11:39.067978165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:39.070784 containerd[1584]: time="2026-01-16T21:11:39.069624056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 21:11:39.070980 containerd[1584]: time="2026-01-16T21:11:39.070934743Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:39.075808 containerd[1584]: time="2026-01-16T21:11:39.075668399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:39.077161 containerd[1584]: time="2026-01-16T21:11:39.076444321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.409377426s" Jan 16 21:11:39.077161 containerd[1584]: time="2026-01-16T21:11:39.076507034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 16 21:11:39.083077 containerd[1584]: time="2026-01-16T21:11:39.083013629Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 21:11:39.096854 containerd[1584]: time="2026-01-16T21:11:39.096053381Z" level=info msg="Container c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:39.110764 containerd[1584]: time="2026-01-16T21:11:39.110662940Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4\"" Jan 16 21:11:39.113024 containerd[1584]: time="2026-01-16T21:11:39.111524127Z" level=info msg="StartContainer for \"c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4\"" Jan 16 21:11:39.114494 containerd[1584]: time="2026-01-16T21:11:39.114439805Z" level=info msg="connecting to shim c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4" address="unix:///run/containerd/s/717ef4aa002f06ed68381b1b6f44c295c83bfc03587da8acd5a575450081678f" protocol=ttrpc version=3 Jan 16 21:11:39.152140 systemd[1]: Started cri-containerd-c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4.scope - libcontainer container c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4. 
Jan 16 21:11:39.217000 audit: BPF prog-id=164 op=LOAD Jan 16 21:11:39.217000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3308 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:39.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336386633653865663366373337666163386130633933393639383466 Jan 16 21:11:39.217000 audit: BPF prog-id=165 op=LOAD Jan 16 21:11:39.217000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3308 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:39.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336386633653865663366373337666163386130633933393639383466 Jan 16 21:11:39.217000 audit: BPF prog-id=165 op=UNLOAD Jan 16 21:11:39.217000 audit[3481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:39.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336386633653865663366373337666163386130633933393639383466 Jan 16 21:11:39.217000 audit: BPF prog-id=164 op=UNLOAD Jan 16 21:11:39.217000 audit[3481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:39.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336386633653865663366373337666163386130633933393639383466 Jan 16 21:11:39.217000 audit: BPF prog-id=166 op=LOAD Jan 16 21:11:39.217000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3308 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:39.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336386633653865663366373337666163386130633933393639383466 Jan 16 21:11:39.260240 containerd[1584]: time="2026-01-16T21:11:39.260153219Z" level=info msg="StartContainer for 
\"c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4\" returns successfully" Jan 16 21:11:39.283081 systemd[1]: cri-containerd-c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4.scope: Deactivated successfully. Jan 16 21:11:39.284000 audit: BPF prog-id=166 op=UNLOAD Jan 16 21:11:39.312146 containerd[1584]: time="2026-01-16T21:11:39.311957791Z" level=info msg="received container exit event container_id:\"c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4\" id:\"c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4\" pid:3493 exited_at:{seconds:1768597899 nanos:282787814}" Jan 16 21:11:39.346377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c68f3e8ef3f737fac8a0c9396984f009d001c6934154b09332992b00496800b4-rootfs.mount: Deactivated successfully. Jan 16 21:11:39.531210 kubelet[2777]: E0116 21:11:39.531142 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:39.760278 kubelet[2777]: I0116 21:11:39.760104 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 21:11:39.760864 kubelet[2777]: E0116 21:11:39.760467 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:39.761906 kubelet[2777]: E0116 21:11:39.761568 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:39.763256 containerd[1584]: time="2026-01-16T21:11:39.763187460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 21:11:41.532189 kubelet[2777]: E0116 21:11:41.530554 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:43.090685 containerd[1584]: time="2026-01-16T21:11:43.090587801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:43.092284 containerd[1584]: time="2026-01-16T21:11:43.091610945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 16 21:11:43.093184 containerd[1584]: time="2026-01-16T21:11:43.093105825Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:43.098152 containerd[1584]: time="2026-01-16T21:11:43.097679748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:43.099705 containerd[1584]: time="2026-01-16T21:11:43.099650404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo 
tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.336400491s" Jan 16 21:11:43.100028 containerd[1584]: time="2026-01-16T21:11:43.100001460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 16 21:11:43.104559 containerd[1584]: time="2026-01-16T21:11:43.104497503Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 21:11:43.127807 containerd[1584]: time="2026-01-16T21:11:43.125803076Z" level=info msg="Container f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:43.141584 containerd[1584]: time="2026-01-16T21:11:43.141493608Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b\"" Jan 16 21:11:43.142873 containerd[1584]: time="2026-01-16T21:11:43.142837786Z" level=info msg="StartContainer for \"f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b\"" Jan 16 21:11:43.152761 containerd[1584]: time="2026-01-16T21:11:43.152685850Z" level=info msg="connecting to shim f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b" address="unix:///run/containerd/s/717ef4aa002f06ed68381b1b6f44c295c83bfc03587da8acd5a575450081678f" protocol=ttrpc version=3 Jan 16 21:11:43.185352 systemd[1]: Started cri-containerd-f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b.scope - libcontainer container f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b. 
Jan 16 21:11:43.249786 kernel: kauditd_printk_skb: 30 callbacks suppressed Jan 16 21:11:43.250045 kernel: audit: type=1334 audit(1768597903.246:566): prog-id=167 op=LOAD Jan 16 21:11:43.246000 audit: BPF prog-id=167 op=LOAD Jan 16 21:11:43.246000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.256212 kernel: audit: type=1300 audit(1768597903.246:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.246000 audit: BPF prog-id=168 op=LOAD Jan 16 21:11:43.269221 kernel: audit: type=1327 audit(1768597903.246:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.269360 kernel: audit: type=1334 audit(1768597903.246:567): prog-id=168 op=LOAD Jan 16 21:11:43.246000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.273101 kernel: audit: type=1300 audit(1768597903.246:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.286840 kernel: audit: type=1327 audit(1768597903.246:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.246000 audit: BPF prog-id=168 op=UNLOAD Jan 16 21:11:43.291959 kernel: audit: type=1334 audit(1768597903.246:568): prog-id=168 op=UNLOAD Jan 16 21:11:43.292102 kernel: audit: type=1300 audit(1768597903.246:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.246000 
audit[3541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.300776 kernel: audit: type=1327 audit(1768597903.246:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.246000 audit: BPF prog-id=167 op=UNLOAD Jan 16 21:11:43.304352 kernel: audit: type=1334 audit(1768597903.246:569): prog-id=167 op=UNLOAD Jan 16 21:11:43.246000 audit[3541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.246000 audit: BPF prog-id=169 op=LOAD Jan 16 21:11:43.246000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3308 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:43.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613439373833613662313863313164626136316661383237326333 Jan 16 21:11:43.331615 containerd[1584]: time="2026-01-16T21:11:43.330724094Z" level=info msg="StartContainer for \"f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b\" returns successfully" Jan 16 21:11:43.543582 kubelet[2777]: E0116 21:11:43.542686 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:43.789216 kubelet[2777]: E0116 21:11:43.788797 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:44.475320 systemd[1]: cri-containerd-f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b.scope: Deactivated successfully. 
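The audit SYSCALL records above carry runc's command line as a hex-encoded PROCTITLE field with NUL-separated argv entries. A short sketch of decoding such a field; the constant below is a truncated prefix of the value logged above, shortened for readability.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated prefix of the PROCTITLE value from the audit records above.
	// auditd encodes the process title as hex, with argv entries separated by NUL.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	argv := strings.Split(string(raw), "\x00")
	fmt.Println(argv) // [runc --root /run/containerd/runc/k8s.io]
}
```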
Jan 16 21:11:44.475833 systemd[1]: cri-containerd-f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b.scope: Consumed 704ms CPU time, 168.8M memory peak, 4.9M read from disk, 171.3M written to disk. Jan 16 21:11:44.477000 audit: BPF prog-id=169 op=UNLOAD Jan 16 21:11:44.484115 containerd[1584]: time="2026-01-16T21:11:44.484042551Z" level=info msg="received container exit event container_id:\"f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b\" id:\"f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b\" pid:3552 exited_at:{seconds:1768597904 nanos:483174990}" Jan 16 21:11:44.586982 kubelet[2777]: I0116 21:11:44.586940 2777 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 21:11:44.629278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a49783a6b18c11dba61fa8272c3f2884e948bfce3d5d723233a334bd12080b-rootfs.mount: Deactivated successfully. Jan 16 21:11:44.684237 kubelet[2777]: W0116 21:11:44.684186 2777 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4580.0.0-p-735bf5553b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4580.0.0-p-735bf5553b' and this object Jan 16 21:11:44.686783 kubelet[2777]: E0116 21:11:44.686194 2777 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4580.0.0-p-735bf5553b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4580.0.0-p-735bf5553b' and this object" logger="UnhandledError" Jan 16 21:11:44.688665 kubelet[2777]: I0116 21:11:44.688548 2777 status_manager.go:890] "Failed to get status for pod" podUID="19564b0c-668b-4551-9f0e-2c1106af1e44" pod="kube-system/coredns-668d6bf9bc-wrnbt" err="pods \"coredns-668d6bf9bc-wrnbt\" is forbidden: User \"system:node:ci-4580.0.0-p-735bf5553b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4580.0.0-p-735bf5553b' and this object" Jan 16 21:11:44.713897 systemd[1]: Created slice kubepods-burstable-pod19564b0c_668b_4551_9f0e_2c1106af1e44.slice - libcontainer container kubepods-burstable-pod19564b0c_668b_4551_9f0e_2c1106af1e44.slice. Jan 16 21:11:44.733700 systemd[1]: Created slice kubepods-besteffort-pod3c7766f8_3124_4eac_b0a1_e1f23a7c1e1f.slice - libcontainer container kubepods-besteffort-pod3c7766f8_3124_4eac_b0a1_e1f23a7c1e1f.slice. Jan 16 21:11:44.748679 systemd[1]: Created slice kubepods-besteffort-pod3ff9aa7f_1c05_4598_8de4_5a5a7bc4f529.slice - libcontainer container kubepods-besteffort-pod3ff9aa7f_1c05_4598_8de4_5a5a7bc4f529.slice. Jan 16 21:11:44.761469 systemd[1]: Created slice kubepods-besteffort-pod98509e25_dadb_4a53_8355_8cb0c0d71e14.slice - libcontainer container kubepods-besteffort-pod98509e25_dadb_4a53_8355_8cb0c0d71e14.slice. Jan 16 21:11:44.774299 systemd[1]: Created slice kubepods-burstable-podc84a3cf3_ba8e_4381_80a9_376236ad5286.slice - libcontainer container kubepods-burstable-podc84a3cf3_ba8e_4381_80a9_376236ad5286.slice. 
Jan 16 21:11:44.788201 kubelet[2777]: I0116 21:11:44.788152 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/c84a3cf3-ba8e-4381-80a9-376236ad5286-kube-api-access-6n4vf\") pod \"coredns-668d6bf9bc-snhm5\" (UID: \"c84a3cf3-ba8e-4381-80a9-376236ad5286\") " pod="kube-system/coredns-668d6bf9bc-snhm5" Jan 16 21:11:44.788828 kubelet[2777]: I0116 21:11:44.788678 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-backend-key-pair\") pod \"whisker-b5b7b79d9-7zpfd\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " pod="calico-system/whisker-b5b7b79d9-7zpfd" Jan 16 21:11:44.789461 kubelet[2777]: I0116 21:11:44.789323 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4wp\" (UniqueName: \"kubernetes.io/projected/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-kube-api-access-md4wp\") pod \"whisker-b5b7b79d9-7zpfd\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " pod="calico-system/whisker-b5b7b79d9-7zpfd" Jan 16 21:11:44.790915 kubelet[2777]: I0116 21:11:44.789863 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a350016-b4b1-4c4d-a81e-6fa230a1b42f-tigera-ca-bundle\") pod \"calico-kube-controllers-67cf49786f-th49d\" (UID: \"4a350016-b4b1-4c4d-a81e-6fa230a1b42f\") " pod="calico-system/calico-kube-controllers-67cf49786f-th49d" Jan 16 21:11:44.790915 kubelet[2777]: I0116 21:11:44.789897 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwp8m\" (UniqueName: \"kubernetes.io/projected/3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f-kube-api-access-zwp8m\") pod \"calico-apiserver-88cb9dd67-54lnr\" (UID: \"3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f\") " pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" Jan 16 21:11:44.790915 kubelet[2777]: I0116 21:11:44.789917 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19564b0c-668b-4551-9f0e-2c1106af1e44-config-volume\") pod \"coredns-668d6bf9bc-wrnbt\" (UID: \"19564b0c-668b-4551-9f0e-2c1106af1e44\") " pod="kube-system/coredns-668d6bf9bc-wrnbt" Jan 16 21:11:44.790915 kubelet[2777]: I0116 21:11:44.789935 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f-calico-apiserver-certs\") pod \"calico-apiserver-88cb9dd67-54lnr\" (UID: \"3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f\") " pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" Jan 16 21:11:44.790915 kubelet[2777]: I0116 21:11:44.789964 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529-config\") pod \"goldmane-666569f655-zcnrm\" (UID: \"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529\") " pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:44.791131 kubelet[2777]: I0116 21:11:44.789987 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c84a3cf3-ba8e-4381-80a9-376236ad5286-config-volume\") pod \"coredns-668d6bf9bc-snhm5\" (UID: \"c84a3cf3-ba8e-4381-80a9-376236ad5286\") " pod="kube-system/coredns-668d6bf9bc-snhm5" Jan 16 21:11:44.791131 kubelet[2777]: I0116 21:11:44.790006 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-ca-bundle\") pod \"whisker-b5b7b79d9-7zpfd\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " pod="calico-system/whisker-b5b7b79d9-7zpfd" Jan 16 21:11:44.791131 kubelet[2777]: I0116 21:11:44.790022 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g75m\" (UniqueName: \"kubernetes.io/projected/3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529-kube-api-access-6g75m\") pod \"goldmane-666569f655-zcnrm\" (UID: \"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529\") " pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:44.791131 kubelet[2777]: I0116 21:11:44.790045 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkx7s\" (UniqueName: \"kubernetes.io/projected/19564b0c-668b-4551-9f0e-2c1106af1e44-kube-api-access-zkx7s\") pod \"coredns-668d6bf9bc-wrnbt\" (UID: \"19564b0c-668b-4551-9f0e-2c1106af1e44\") " pod="kube-system/coredns-668d6bf9bc-wrnbt" Jan 16 21:11:44.791131 kubelet[2777]: I0116 21:11:44.790063 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98509e25-dadb-4a53-8355-8cb0c0d71e14-calico-apiserver-certs\") pod \"calico-apiserver-88cb9dd67-rsm8g\" (UID: \"98509e25-dadb-4a53-8355-8cb0c0d71e14\") " pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" Jan 16 21:11:44.791045 systemd[1]: Created slice kubepods-besteffort-pode108e12f_34f9_438f_ba2a_1fbfbc2d7fdb.slice - libcontainer container kubepods-besteffort-pode108e12f_34f9_438f_ba2a_1fbfbc2d7fdb.slice. 
Jan 16 21:11:44.791356 kubelet[2777]: I0116 21:11:44.790085 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529-goldmane-ca-bundle\") pod \"goldmane-666569f655-zcnrm\" (UID: \"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529\") " pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:44.791356 kubelet[2777]: I0116 21:11:44.790105 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr5t\" (UniqueName: \"kubernetes.io/projected/4a350016-b4b1-4c4d-a81e-6fa230a1b42f-kube-api-access-mzr5t\") pod \"calico-kube-controllers-67cf49786f-th49d\" (UID: \"4a350016-b4b1-4c4d-a81e-6fa230a1b42f\") " pod="calico-system/calico-kube-controllers-67cf49786f-th49d" Jan 16 21:11:44.791356 kubelet[2777]: I0116 21:11:44.790140 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4h9g\" (UniqueName: \"kubernetes.io/projected/98509e25-dadb-4a53-8355-8cb0c0d71e14-kube-api-access-q4h9g\") pod \"calico-apiserver-88cb9dd67-rsm8g\" (UID: \"98509e25-dadb-4a53-8355-8cb0c0d71e14\") " pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" Jan 16 21:11:44.791356 kubelet[2777]: I0116 21:11:44.790164 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529-goldmane-key-pair\") pod \"goldmane-666569f655-zcnrm\" (UID: \"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529\") " pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:44.800910 systemd[1]: Created slice kubepods-besteffort-pod4a350016_b4b1_4c4d_a81e_6fa230a1b42f.slice - libcontainer container kubepods-besteffort-pod4a350016_b4b1_4c4d_a81e_6fa230a1b42f.slice. 
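The reconciler_common entries above are kubelet verifying the ConfigMap, Secret, and projected-token volumes declared by these pods before mounting them. As a rough, hypothetical illustration only (the actual manifests are not in the log), a ConfigMap volume like coredns's config-volume is declared with the Go API types along these lines; the volume and ConfigMap names mirror the log, the mount path is assumed.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch of the volume shape behind the "config-volume" entries above;
	// only the names are taken from the log, the rest is a plain example.
	vol := corev1.Volume{
		Name: "config-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
			},
		},
	}
	mount := corev1.VolumeMount{Name: "config-volume", MountPath: "/etc/coredns", ReadOnly: true}
	fmt.Printf("%s -> %s\n", vol.Name, mount.MountPath)
}
```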
Jan 16 21:11:44.817415 kubelet[2777]: E0116 21:11:44.816006 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:44.823941 containerd[1584]: time="2026-01-16T21:11:44.823887652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 21:11:45.046168 containerd[1584]: time="2026-01-16T21:11:45.045109281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-54lnr,Uid:3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:11:45.058260 containerd[1584]: time="2026-01-16T21:11:45.058185235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcnrm,Uid:3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:45.086336 containerd[1584]: time="2026-01-16T21:11:45.085658924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-rsm8g,Uid:98509e25-dadb-4a53-8355-8cb0c0d71e14,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:11:45.113050 containerd[1584]: time="2026-01-16T21:11:45.113001008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67cf49786f-th49d,Uid:4a350016-b4b1-4c4d-a81e-6fa230a1b42f,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:45.116154 containerd[1584]: time="2026-01-16T21:11:45.116070739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5b7b79d9-7zpfd,Uid:e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:45.386253 containerd[1584]: time="2026-01-16T21:11:45.386065397Z" level=error msg="Failed to destroy network for sandbox \"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.387780 containerd[1584]: time="2026-01-16T21:11:45.386155484Z" level=error msg="Failed to destroy network for sandbox \"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.393595 containerd[1584]: time="2026-01-16T21:11:45.391710397Z" level=error msg="Failed to destroy network for sandbox \"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.396541 containerd[1584]: time="2026-01-16T21:11:45.396301521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-54lnr,Uid:3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.400525 containerd[1584]: time="2026-01-16T21:11:45.400205865Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-zcnrm,Uid:3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.410964 kubelet[2777]: E0116 21:11:45.410880 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.411860 kubelet[2777]: E0116 21:11:45.411097 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.411860 kubelet[2777]: E0116 21:11:45.411358 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:45.411860 kubelet[2777]: E0116 21:11:45.411388 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" Jan 16 21:11:45.411860 kubelet[2777]: E0116 21:11:45.411401 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zcnrm" Jan 16 21:11:45.412368 kubelet[2777]: E0116 21:11:45.411420 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" Jan 16 21:11:45.412368 kubelet[2777]: E0116 21:11:45.411474 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-zcnrm_calico-system(3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zcnrm_calico-system(3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be897fc17fd693c36638f6c850713e3db1129caf2ce1e80c0c381360c7c15108\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:11:45.412368 kubelet[2777]: E0116 21:11:45.411483 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88cb9dd67-54lnr_calico-apiserver(3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88cb9dd67-54lnr_calico-apiserver(3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23c0ea982f6b461fcf8b7c706590f208c65283db75686302f4a1a2a7c11d378e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:11:45.414375 containerd[1584]: time="2026-01-16T21:11:45.414293453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67cf49786f-th49d,Uid:4a350016-b4b1-4c4d-a81e-6fa230a1b42f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.415283 kubelet[2777]: E0116 21:11:45.414887 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.415424 kubelet[2777]: E0116 21:11:45.415315 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" Jan 16 21:11:45.415424 kubelet[2777]: E0116 21:11:45.415356 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" Jan 16 
21:11:45.415424 kubelet[2777]: E0116 21:11:45.415414 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67cf49786f-th49d_calico-system(4a350016-b4b1-4c4d-a81e-6fa230a1b42f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67cf49786f-th49d_calico-system(4a350016-b4b1-4c4d-a81e-6fa230a1b42f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64a957fff917698ce6b599e1bf00442412caeead9a023a09e542bcf16b4b1e62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:11:45.416576 containerd[1584]: time="2026-01-16T21:11:45.416516410Z" level=error msg="Failed to destroy network for sandbox \"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.421242 containerd[1584]: time="2026-01-16T21:11:45.420482339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-rsm8g,Uid:98509e25-dadb-4a53-8355-8cb0c0d71e14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.421543 kubelet[2777]: E0116 21:11:45.421346 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.421543 kubelet[2777]: E0116 21:11:45.421408 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" Jan 16 21:11:45.421543 kubelet[2777]: E0116 21:11:45.421455 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" Jan 16 21:11:45.422966 kubelet[2777]: E0116 21:11:45.421513 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88cb9dd67-rsm8g_calico-apiserver(98509e25-dadb-4a53-8355-8cb0c0d71e14)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-88cb9dd67-rsm8g_calico-apiserver(98509e25-dadb-4a53-8355-8cb0c0d71e14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4f092085a4da24ad1ee0d74e7045d8fe265717514c1e165a38602e7fbdab3b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:11:45.450105 containerd[1584]: time="2026-01-16T21:11:45.450037185Z" level=error msg="Failed to destroy network for sandbox \"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.453330 containerd[1584]: time="2026-01-16T21:11:45.453232494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5b7b79d9-7zpfd,Uid:e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.455032 kubelet[2777]: E0116 21:11:45.454362 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.455032 kubelet[2777]: E0116 21:11:45.454489 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b5b7b79d9-7zpfd" Jan 16 21:11:45.455032 kubelet[2777]: E0116 21:11:45.454529 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b5b7b79d9-7zpfd" Jan 16 21:11:45.455258 kubelet[2777]: E0116 21:11:45.454578 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b5b7b79d9-7zpfd_calico-system(e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b5b7b79d9-7zpfd_calico-system(e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b8fab1565685cff6798c903420d3f0977e78801185c10c18824f4e886584e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b5b7b79d9-7zpfd" podUID="e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb" Jan 16 21:11:45.538910 systemd[1]: Created slice kubepods-besteffort-pod948ce78a_6a96_42a6_a8f0_360a2ec834df.slice - libcontainer container kubepods-besteffort-pod948ce78a_6a96_42a6_a8f0_360a2ec834df.slice. Jan 16 21:11:45.544075 containerd[1584]: time="2026-01-16T21:11:45.543783194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqdjm,Uid:948ce78a-6a96-42a6-a8f0-360a2ec834df,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:45.629161 containerd[1584]: time="2026-01-16T21:11:45.628996145Z" level=error msg="Failed to destroy network for sandbox \"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.632763 containerd[1584]: time="2026-01-16T21:11:45.632562599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqdjm,Uid:948ce78a-6a96-42a6-a8f0-360a2ec834df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.634262 kubelet[2777]: E0116 21:11:45.634170 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:45.634262 kubelet[2777]: E0116 21:11:45.634259 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:45.635151 kubelet[2777]: E0116 21:11:45.634291 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqdjm" Jan 16 21:11:45.635151 kubelet[2777]: E0116 21:11:45.634341 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"633791a311725b4a220dde34fde07e997c6b9d89a0e01a290c140385972a80ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:11:45.653562 systemd[1]: run-netns-cni\x2de5030752\x2d03cf\x2dabd6\x2d49b8\x2df14cbf2d3cad.mount: Deactivated successfully. Jan 16 21:11:45.893932 kubelet[2777]: E0116 21:11:45.893383 2777 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 16 21:11:45.893932 kubelet[2777]: E0116 21:11:45.893542 2777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c84a3cf3-ba8e-4381-80a9-376236ad5286-config-volume podName:c84a3cf3-ba8e-4381-80a9-376236ad5286 nodeName:}" failed. No retries permitted until 2026-01-16 21:11:46.393518841 +0000 UTC m=+38.124490339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c84a3cf3-ba8e-4381-80a9-376236ad5286-config-volume") pod "coredns-668d6bf9bc-snhm5" (UID: "c84a3cf3-ba8e-4381-80a9-376236ad5286") : failed to sync configmap cache: timed out waiting for the condition Jan 16 21:11:45.893932 kubelet[2777]: E0116 21:11:45.893865 2777 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 16 21:11:45.894218 kubelet[2777]: E0116 21:11:45.893951 2777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19564b0c-668b-4551-9f0e-2c1106af1e44-config-volume podName:19564b0c-668b-4551-9f0e-2c1106af1e44 nodeName:}" failed. No retries permitted until 2026-01-16 21:11:46.393933972 +0000 UTC m=+38.124905456 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/19564b0c-668b-4551-9f0e-2c1106af1e44-config-volume") pod "coredns-668d6bf9bc-wrnbt" (UID: "19564b0c-668b-4551-9f0e-2c1106af1e44") : failed to sync configmap cache: timed out waiting for the condition Jan 16 21:11:46.524654 kubelet[2777]: E0116 21:11:46.524167 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:46.529748 containerd[1584]: time="2026-01-16T21:11:46.529681326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrnbt,Uid:19564b0c-668b-4551-9f0e-2c1106af1e44,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:46.584289 kubelet[2777]: E0116 21:11:46.583597 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:46.587599 containerd[1584]: time="2026-01-16T21:11:46.587111204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snhm5,Uid:c84a3cf3-ba8e-4381-80a9-376236ad5286,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:46.672492 containerd[1584]: time="2026-01-16T21:11:46.672408903Z" level=error msg="Failed to destroy network for sandbox \"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.676885 systemd[1]: run-netns-cni\x2d7b0512af\x2db91c\x2db8a6\x2dcb57\x2d735b27094d0e.mount: Deactivated successfully. 
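The "No retries permitted until … (durationBeforeRetry 500ms)" lines above show kubelet's volume manager backing off after the ConfigMap cache failed to sync. A minimal sketch of that doubling-backoff shape: the 500ms starting point is taken from the log, while the doubling factor and cap are illustrative assumptions rather than values read from kubelet's source.

```go
package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the previous delay up to a cap, the shape suggested
// by the durationBeforeRetry logging above. The initial 500ms comes from the
// log; the cap here is an arbitrary illustrative value.
func nextRetryDelay(prev, maxDelay time.Duration) time.Duration {
	if prev <= 0 {
		return 500 * time.Millisecond
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 6; i++ {
		d = nextRetryDelay(d, 2*time.Minute)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s, 16s
	}
}
```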
Jan 16 21:11:46.730552 containerd[1584]: time="2026-01-16T21:11:46.730096873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrnbt,Uid:19564b0c-668b-4551-9f0e-2c1106af1e44,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.734554 kubelet[2777]: E0116 21:11:46.733204 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.734554 kubelet[2777]: E0116 21:11:46.733277 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wrnbt" Jan 16 21:11:46.734554 kubelet[2777]: E0116 21:11:46.733318 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wrnbt" Jan 16 21:11:46.738342 kubelet[2777]: E0116 21:11:46.733405 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wrnbt_kube-system(19564b0c-668b-4551-9f0e-2c1106af1e44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wrnbt_kube-system(19564b0c-668b-4551-9f0e-2c1106af1e44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa9f4f056e5d3cce0e8ca9ac06feb634e65988e53ce9e4336905cf0777dac4d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wrnbt" podUID="19564b0c-668b-4551-9f0e-2c1106af1e44" Jan 16 21:11:46.797876 containerd[1584]: time="2026-01-16T21:11:46.795554790Z" level=error msg="Failed to destroy network for sandbox \"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.800952 containerd[1584]: time="2026-01-16T21:11:46.800859531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snhm5,Uid:c84a3cf3-ba8e-4381-80a9-376236ad5286,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.806955 kubelet[2777]: E0116 21:11:46.801185 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:11:46.806955 kubelet[2777]: E0116 21:11:46.801290 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snhm5" Jan 16 21:11:46.806955 kubelet[2777]: E0116 21:11:46.801333 2777 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snhm5" Jan 16 21:11:46.803250 systemd[1]: run-netns-cni\x2d09d0f4d5\x2db637\x2dcb40\x2d0614\x2d1afe5e5a7cd6.mount: Deactivated successfully. Jan 16 21:11:46.807357 kubelet[2777]: E0116 21:11:46.801400 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-snhm5_kube-system(c84a3cf3-ba8e-4381-80a9-376236ad5286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-snhm5_kube-system(c84a3cf3-ba8e-4381-80a9-376236ad5286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ff678f586e3f28b054f51c273a08cea413d3d59b20a4b172e7f2a28a5c51a64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-snhm5" podUID="c84a3cf3-ba8e-4381-80a9-376236ad5286" Jan 16 21:11:53.345094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243253218.mount: Deactivated successfully. 
Jan 16 21:11:53.392933 containerd[1584]: time="2026-01-16T21:11:53.392859000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:53.411564 containerd[1584]: time="2026-01-16T21:11:53.411090989Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:53.415396 containerd[1584]: time="2026-01-16T21:11:53.415263670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 16 21:11:53.415803 containerd[1584]: time="2026-01-16T21:11:53.415749654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.591558903s" Jan 16 21:11:53.418245 containerd[1584]: time="2026-01-16T21:11:53.417513759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:11:53.426082 containerd[1584]: time="2026-01-16T21:11:53.425870088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 16 21:11:53.460067 containerd[1584]: time="2026-01-16T21:11:53.459952024Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 21:11:53.530001 containerd[1584]: time="2026-01-16T21:11:53.529946893Z" level=info msg="Container 41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:11:53.531165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415329787.mount: Deactivated successfully. Jan 16 21:11:53.585745 containerd[1584]: time="2026-01-16T21:11:53.585658913Z" level=info msg="CreateContainer within sandbox \"06437600ff3ddf1cecd53c58b50e40b56d1554a780972e43d604c75923e8dca0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60\"" Jan 16 21:11:53.587877 containerd[1584]: time="2026-01-16T21:11:53.586912656Z" level=info msg="StartContainer for \"41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60\"" Jan 16 21:11:53.596712 containerd[1584]: time="2026-01-16T21:11:53.596599929Z" level=info msg="connecting to shim 41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60" address="unix:///run/containerd/s/717ef4aa002f06ed68381b1b6f44c295c83bfc03587da8acd5a575450081678f" protocol=ttrpc version=3 Jan 16 21:11:53.652129 systemd[1]: Started cri-containerd-41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60.scope - libcontainer container 41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60. 
Jan 16 21:11:53.742200 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 16 21:11:53.743021 kernel: audit: type=1334 audit(1768597913.735:572): prog-id=170 op=LOAD Jan 16 21:11:53.735000 audit: BPF prog-id=170 op=LOAD Jan 16 21:11:53.735000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.755773 kernel: audit: type=1300 audit(1768597913.735:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.756203 kernel: audit: type=1327 audit(1768597913.735:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: BPF prog-id=171 op=LOAD Jan 16 21:11:53.735000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.770900 kernel: audit: type=1334 audit(1768597913.735:573): prog-id=171 op=LOAD Jan 16 21:11:53.771049 kernel: audit: type=1300 audit(1768597913.735:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.778503 kernel: audit: type=1327 audit(1768597913.735:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: BPF prog-id=171 op=UNLOAD Jan 16 21:11:53.784619 kernel: audit: type=1334 audit(1768597913.735:574): prog-id=171 op=UNLOAD Jan 16 21:11:53.735000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.803055 kernel: audit: type=1300 
audit(1768597913.735:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.803193 kernel: audit: type=1327 audit(1768597913.735:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: BPF prog-id=170 op=UNLOAD Jan 16 21:11:53.809854 kernel: audit: type=1334 audit(1768597913.735:575): prog-id=170 op=UNLOAD Jan 16 21:11:53.735000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.735000 audit: BPF prog-id=172 op=LOAD Jan 16 21:11:53.735000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3308 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:53.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653933616236646261343230643561323164646331336233666161 Jan 16 21:11:53.825680 containerd[1584]: time="2026-01-16T21:11:53.825623056Z" level=info msg="StartContainer for \"41e93ab6dba420d5a21ddc13b3faaae270bf9e6bbfacf1a0547ed07e9f8f3a60\" returns successfully" Jan 16 21:11:53.879620 kubelet[2777]: E0116 21:11:53.879574 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:54.050050 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 21:11:54.050818 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
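The recurring "Nameserver limits exceeded" warnings come from kubelet capping the nameserver list it hands to pods; the applied line keeps three entries (one of them a duplicate), which suggests the host resolv.conf listed more. A hedged sketch of that truncation, assuming a limit of three to match the applied line; the real kubelet constant is not stated in the log.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assumed limit, matching the three entries in the "applied nameserver
	// line" above; kubelet's actual constant may differ.
	const maxNameservers = 3

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("dropping %d nameserver(s)\n", len(nameservers)-maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
}
```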
Jan 16 21:11:54.383783 kubelet[2777]: I0116 21:11:54.383575 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wctp9" podStartSLOduration=2.471408246 podStartE2EDuration="21.383544274s" podCreationTimestamp="2026-01-16 21:11:33 +0000 UTC" firstStartedPulling="2026-01-16 21:11:34.514664108 +0000 UTC m=+26.245635594" lastFinishedPulling="2026-01-16 21:11:53.426800122 +0000 UTC m=+45.157771622" observedRunningTime="2026-01-16 21:11:53.921963588 +0000 UTC m=+45.652935096" watchObservedRunningTime="2026-01-16 21:11:54.383544274 +0000 UTC m=+46.114515787" Jan 16 21:11:54.472052 kubelet[2777]: I0116 21:11:54.471981 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-ca-bundle\") pod \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " Jan 16 21:11:54.472052 kubelet[2777]: I0116 21:11:54.472068 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-backend-key-pair\") pod \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " Jan 16 21:11:54.472052 kubelet[2777]: I0116 21:11:54.472089 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md4wp\" (UniqueName: \"kubernetes.io/projected/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-kube-api-access-md4wp\") pod \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\" (UID: \"e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb\") " Jan 16 21:11:54.473582 kubelet[2777]: I0116 21:11:54.473520 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb" (UID: "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 21:11:54.482567 systemd[1]: var-lib-kubelet-pods-e108e12f\x2d34f9\x2d438f\x2dba2a\x2d1fbfbc2d7fdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmd4wp.mount: Deactivated successfully. Jan 16 21:11:54.483030 kubelet[2777]: I0116 21:11:54.482720 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-kube-api-access-md4wp" (OuterVolumeSpecName: "kube-api-access-md4wp") pod "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb" (UID: "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb"). InnerVolumeSpecName "kube-api-access-md4wp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 21:11:54.488385 kubelet[2777]: I0116 21:11:54.488311 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb" (UID: "e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 21:11:54.489422 systemd[1]: var-lib-kubelet-pods-e108e12f\x2d34f9\x2d438f\x2dba2a\x2d1fbfbc2d7fdb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
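The pod_startup_latency_tracker entry at the start of this block is internally consistent: podStartE2EDuration is the gap from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration appears to be that gap minus the image-pull window taken from the monotonic m=+ offsets (this reading of the fields is inferred from how the numbers line up, not from kubelet documentation). Checking the arithmetic with figures copied from the entry:

    from decimal import Decimal as D

    # Wall-clock values reduced to seconds past 21:11:00 UTC; pull window from the m=+ offsets.
    pod_created            = D("33")            # podCreationTimestamp     21:11:33
    watch_observed_running = D("54.383544274")  # watchObservedRunningTime 21:11:54.383544274
    first_started_pulling  = D("26.245635594")  # firstStartedPulling, m=+ offset
    last_finished_pulling  = D("45.157771622")  # lastFinishedPulling, m=+ offset

    e2e  = watch_observed_running - pod_created            # 21.383544274 -> podStartE2EDuration
    pull = last_finished_pulling - first_started_pulling   # 18.912136028 (time spent pulling the image)
    print(e2e, e2e - pull)                                 # 21.383544274 2.471408246 -> podStartSLOduration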
Jan 16 21:11:54.550390 systemd[1]: Removed slice kubepods-besteffort-pode108e12f_34f9_438f_ba2a_1fbfbc2d7fdb.slice - libcontainer container kubepods-besteffort-pode108e12f_34f9_438f_ba2a_1fbfbc2d7fdb.slice. Jan 16 21:11:54.573067 kubelet[2777]: I0116 21:11:54.573003 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-ca-bundle\") on node \"ci-4580.0.0-p-735bf5553b\" DevicePath \"\"" Jan 16 21:11:54.573067 kubelet[2777]: I0116 21:11:54.573059 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-whisker-backend-key-pair\") on node \"ci-4580.0.0-p-735bf5553b\" DevicePath \"\"" Jan 16 21:11:54.573067 kubelet[2777]: I0116 21:11:54.573078 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-md4wp\" (UniqueName: \"kubernetes.io/projected/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb-kube-api-access-md4wp\") on node \"ci-4580.0.0-p-735bf5553b\" DevicePath \"\"" Jan 16 21:11:54.882615 kubelet[2777]: E0116 21:11:54.882571 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:55.007421 systemd[1]: Created slice kubepods-besteffort-pod23273946_e0fa_48ec_9cd8_e5c3fd7332d3.slice - libcontainer container kubepods-besteffort-pod23273946_e0fa_48ec_9cd8_e5c3fd7332d3.slice. Jan 16 21:11:55.077359 kubelet[2777]: I0116 21:11:55.077311 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvb9b\" (UniqueName: \"kubernetes.io/projected/23273946-e0fa-48ec-9cd8-e5c3fd7332d3-kube-api-access-wvb9b\") pod \"whisker-f8d65f9d5-lfhf5\" (UID: \"23273946-e0fa-48ec-9cd8-e5c3fd7332d3\") " pod="calico-system/whisker-f8d65f9d5-lfhf5" Jan 16 21:11:55.077595 kubelet[2777]: I0116 21:11:55.077361 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23273946-e0fa-48ec-9cd8-e5c3fd7332d3-whisker-backend-key-pair\") pod \"whisker-f8d65f9d5-lfhf5\" (UID: \"23273946-e0fa-48ec-9cd8-e5c3fd7332d3\") " pod="calico-system/whisker-f8d65f9d5-lfhf5" Jan 16 21:11:55.077595 kubelet[2777]: I0116 21:11:55.077498 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23273946-e0fa-48ec-9cd8-e5c3fd7332d3-whisker-ca-bundle\") pod \"whisker-f8d65f9d5-lfhf5\" (UID: \"23273946-e0fa-48ec-9cd8-e5c3fd7332d3\") " pod="calico-system/whisker-f8d65f9d5-lfhf5" Jan 16 21:11:55.315719 containerd[1584]: time="2026-01-16T21:11:55.315643577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8d65f9d5-lfhf5,Uid:23273946-e0fa-48ec-9cd8-e5c3fd7332d3,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:55.625475 systemd-networkd[1484]: cali5af656c0c98: Link UP Jan 16 21:11:55.625846 systemd-networkd[1484]: cali5af656c0c98: Gained carrier Jan 16 21:11:55.668305 containerd[1584]: 2026-01-16 21:11:55.356 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:55.668305 containerd[1584]: 2026-01-16 21:11:55.391 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0 whisker-f8d65f9d5- calico-system 23273946-e0fa-48ec-9cd8-e5c3fd7332d3 919 0 2026-01-16 21:11:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f8d65f9d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b whisker-f8d65f9d5-lfhf5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5af656c0c98 [] [] }} ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-" Jan 16 21:11:55.668305 containerd[1584]: 2026-01-16 21:11:55.391 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.668305 containerd[1584]: 2026-01-16 21:11:55.524 [INFO][3940] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" HandleID="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Workload="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.526 [INFO][3940] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" HandleID="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Workload="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001236f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"whisker-f8d65f9d5-lfhf5", "timestamp":"2026-01-16 21:11:55.52435548 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.526 [INFO][3940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.526 [INFO][3940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.527 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.544 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.556 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.565 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.568 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.668669 containerd[1584]: 2026-01-16 21:11:55.572 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.572 [INFO][3940] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.575 [INFO][3940] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2 Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.583 [INFO][3940] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.593 [INFO][3940] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.193/26] block=192.168.29.192/26 handle="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.594 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.193/26] handle="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.595 [INFO][3940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:11:55.670648 containerd[1584]: 2026-01-16 21:11:55.595 [INFO][3940] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.193/26] IPv6=[] ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" HandleID="k8s-pod-network.7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Workload="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.671039 containerd[1584]: 2026-01-16 21:11:55.600 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0", GenerateName:"whisker-f8d65f9d5-", Namespace:"calico-system", SelfLink:"", UID:"23273946-e0fa-48ec-9cd8-e5c3fd7332d3", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8d65f9d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"whisker-f8d65f9d5-lfhf5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5af656c0c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:55.671039 containerd[1584]: 2026-01-16 21:11:55.600 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.193/32] ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.671271 containerd[1584]: 2026-01-16 21:11:55.600 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5af656c0c98 ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.671271 containerd[1584]: 2026-01-16 21:11:55.629 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:55.671380 containerd[1584]: 2026-01-16 21:11:55.630 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" 
Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0", GenerateName:"whisker-f8d65f9d5-", Namespace:"calico-system", SelfLink:"", UID:"23273946-e0fa-48ec-9cd8-e5c3fd7332d3", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8d65f9d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2", Pod:"whisker-f8d65f9d5-lfhf5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5af656c0c98", MAC:"22:3a:1d:f6:5e:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:55.671529 containerd[1584]: 2026-01-16 21:11:55.654 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" Namespace="calico-system" Pod="whisker-f8d65f9d5-lfhf5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-whisker--f8d65f9d5--lfhf5-eth0" Jan 16 21:11:56.017055 containerd[1584]: time="2026-01-16T21:11:56.016950531Z" level=info msg="connecting to shim 7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2" address="unix:///run/containerd/s/9fcba487e022cfe2861e93a1ac7efe3c0dbf404165341725968235760e0d8788" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:56.121159 systemd[1]: Started cri-containerd-7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2.scope - libcontainer container 7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2. 
Jan 16 21:11:56.156000 audit: BPF prog-id=173 op=LOAD Jan 16 21:11:56.158000 audit: BPF prog-id=174 op=LOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=174 op=UNLOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=175 op=LOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=176 op=LOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=176 op=UNLOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=175 op=UNLOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.158000 audit: BPF prog-id=177 op=LOAD Jan 16 21:11:56.158000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4048 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:56.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764626365653036346133626566613634383865316335303366333262 Jan 16 21:11:56.277615 containerd[1584]: time="2026-01-16T21:11:56.277233588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8d65f9d5-lfhf5,Uid:23273946-e0fa-48ec-9cd8-e5c3fd7332d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dbcee064a3befa6488e1c503f32be84e1ebcc76b79f55fa3bd5b1805c4ca6f2\"" Jan 16 21:11:56.282158 containerd[1584]: time="2026-01-16T21:11:56.282111281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:11:56.533070 containerd[1584]: time="2026-01-16T21:11:56.532786321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-rsm8g,Uid:98509e25-dadb-4a53-8355-8cb0c0d71e14,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:11:56.533723 containerd[1584]: time="2026-01-16T21:11:56.533679232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67cf49786f-th49d,Uid:4a350016-b4b1-4c4d-a81e-6fa230a1b42f,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:56.535100 kubelet[2777]: I0116 21:11:56.535005 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb" path="/var/lib/kubelet/pods/e108e12f-34f9-438f-ba2a-1fbfbc2d7fdb/volumes" Jan 16 21:11:56.655022 containerd[1584]: time="2026-01-16T21:11:56.654961803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:11:56.656048 containerd[1584]: time="2026-01-16T21:11:56.655985208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:11:56.656176 containerd[1584]: time="2026-01-16T21:11:56.656139333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:11:56.656943 kubelet[2777]: E0116 21:11:56.656786 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:11:56.656943 
kubelet[2777]: E0116 21:11:56.656887 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:11:56.665132 kubelet[2777]: E0116 21:11:56.664948 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9fe08925ae42432c8b9a6a7a9c1f1d0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:11:56.668868 containerd[1584]: time="2026-01-16T21:11:56.668814150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:11:56.809394 systemd-networkd[1484]: calif653c38eea2: Link UP Jan 16 21:11:56.812424 systemd-networkd[1484]: calif653c38eea2: Gained carrier Jan 16 21:11:56.835363 containerd[1584]: 2026-01-16 21:11:56.600 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:56.835363 containerd[1584]: 2026-01-16 21:11:56.626 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0 calico-kube-controllers-67cf49786f- calico-system 4a350016-b4b1-4c4d-a81e-6fa230a1b42f 832 0 2026-01-16 21:11:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67cf49786f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b calico-kube-controllers-67cf49786f-th49d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif653c38eea2 [] [] }} 
ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-" Jan 16 21:11:56.835363 containerd[1584]: 2026-01-16 21:11:56.627 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.835363 containerd[1584]: 2026-01-16 21:11:56.718 [INFO][4118] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" HandleID="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.720 [INFO][4118] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" HandleID="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001018e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"calico-kube-controllers-67cf49786f-th49d", "timestamp":"2026-01-16 21:11:56.718020273 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.721 [INFO][4118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.722 [INFO][4118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.724 [INFO][4118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.746 [INFO][4118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.753 [INFO][4118] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.761 [INFO][4118] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.766 [INFO][4118] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836041 containerd[1584]: 2026-01-16 21:11:56.770 [INFO][4118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.770 [INFO][4118] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.774 [INFO][4118] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5 Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.786 [INFO][4118] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4118] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.194/26] block=192.168.29.192/26 handle="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.194/26] handle="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:11:56.836504 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4118] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.194/26] IPv6=[] ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" HandleID="k8s-pod-network.9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.837167 containerd[1584]: 2026-01-16 21:11:56.804 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0", GenerateName:"calico-kube-controllers-67cf49786f-", Namespace:"calico-system", SelfLink:"", UID:"4a350016-b4b1-4c4d-a81e-6fa230a1b42f", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67cf49786f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"calico-kube-controllers-67cf49786f-th49d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif653c38eea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:56.837266 containerd[1584]: 2026-01-16 21:11:56.804 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.194/32] ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.837266 containerd[1584]: 2026-01-16 21:11:56.804 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif653c38eea2 ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.837266 containerd[1584]: 2026-01-16 21:11:56.814 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" 
WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.837333 containerd[1584]: 2026-01-16 21:11:56.815 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0", GenerateName:"calico-kube-controllers-67cf49786f-", Namespace:"calico-system", SelfLink:"", UID:"4a350016-b4b1-4c4d-a81e-6fa230a1b42f", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67cf49786f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5", Pod:"calico-kube-controllers-67cf49786f-th49d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif653c38eea2", MAC:"7e:b5:78:02:6a:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:56.837434 containerd[1584]: 2026-01-16 21:11:56.831 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" Namespace="calico-system" Pod="calico-kube-controllers-67cf49786f-th49d" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--kube--controllers--67cf49786f--th49d-eth0" Jan 16 21:11:56.904755 containerd[1584]: time="2026-01-16T21:11:56.904663115Z" level=info msg="connecting to shim 9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5" address="unix:///run/containerd/s/0fee88d85cc03fd441821a3e4922cee76593d2e9cd7bc5ef4d144874224e32b0" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:56.935315 systemd-networkd[1484]: calidb8e7147dce: Link UP Jan 16 21:11:56.937823 systemd-networkd[1484]: calidb8e7147dce: Gained carrier Jan 16 21:11:56.986337 containerd[1584]: 2026-01-16 21:11:56.627 [INFO][4094] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:56.986337 containerd[1584]: 2026-01-16 21:11:56.648 [INFO][4094] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0 calico-apiserver-88cb9dd67- calico-apiserver 98509e25-dadb-4a53-8355-8cb0c0d71e14 838 0 2026-01-16 21:11:27 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88cb9dd67 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b calico-apiserver-88cb9dd67-rsm8g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb8e7147dce [] [] }} ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-" Jan 16 21:11:56.986337 containerd[1584]: 2026-01-16 21:11:56.649 [INFO][4094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.986337 containerd[1584]: 2026-01-16 21:11:56.720 [INFO][4124] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" HandleID="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.721 [INFO][4124] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" HandleID="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580.0.0-p-735bf5553b", "pod":"calico-apiserver-88cb9dd67-rsm8g", "timestamp":"2026-01-16 21:11:56.720500872 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.721 [INFO][4124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.796 [INFO][4124] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.853 [INFO][4124] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.864 [INFO][4124] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.876 [INFO][4124] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.883 [INFO][4124] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.986928 containerd[1584]: 2026-01-16 21:11:56.892 [INFO][4124] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.892 [INFO][4124] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.896 [INFO][4124] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.903 [INFO][4124] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.919 [INFO][4124] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.195/26] block=192.168.29.192/26 handle="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.920 [INFO][4124] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.195/26] handle="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.920 [INFO][4124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:11:56.987669 containerd[1584]: 2026-01-16 21:11:56.920 [INFO][4124] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.195/26] IPv6=[] ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" HandleID="k8s-pod-network.2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.987110 systemd[1]: Started cri-containerd-9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5.scope - libcontainer container 9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5. 
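The MAC addresses recorded in the WorkloadEndpoint specs above (22:3a:1d:f6:5e:fd for cali5af656c0c98, 7e:b5:78:02:6a:48 for calif653c38eea2) look software-generated rather than vendor-assigned: the locally-administered bit of the first octet is set and the multicast bit is clear. A small illustrative check:

    def describe(mac: str) -> str:
        first_octet = int(mac.split(":")[0], 16)
        local = "locally administered" if first_octet & 0x02 else "globally unique (OUI)"
        cast  = "multicast" if first_octet & 0x01 else "unicast"
        return f"{mac}: {local}, {cast}"

    for mac in ("22:3a:1d:f6:5e:fd", "7e:b5:78:02:6a:48"):
        print(describe(mac))   # both print: locally administered, unicast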
Jan 16 21:11:56.988361 containerd[1584]: 2026-01-16 21:11:56.929 [INFO][4094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0", GenerateName:"calico-apiserver-88cb9dd67-", Namespace:"calico-apiserver", SelfLink:"", UID:"98509e25-dadb-4a53-8355-8cb0c0d71e14", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88cb9dd67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"calico-apiserver-88cb9dd67-rsm8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb8e7147dce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:56.988445 containerd[1584]: 2026-01-16 21:11:56.930 [INFO][4094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.195/32] ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.988445 containerd[1584]: 2026-01-16 21:11:56.930 [INFO][4094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb8e7147dce ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.988445 containerd[1584]: 2026-01-16 21:11:56.935 [INFO][4094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:56.989712 containerd[1584]: 2026-01-16 21:11:56.936 [INFO][4094] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0", GenerateName:"calico-apiserver-88cb9dd67-", Namespace:"calico-apiserver", SelfLink:"", UID:"98509e25-dadb-4a53-8355-8cb0c0d71e14", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88cb9dd67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d", Pod:"calico-apiserver-88cb9dd67-rsm8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb8e7147dce", MAC:"aa:02:b4:c7:c3:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:56.989952 containerd[1584]: 2026-01-16 21:11:56.977 [INFO][4094] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-rsm8g" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--rsm8g-eth0" Jan 16 21:11:57.019000 audit: BPF prog-id=178 op=LOAD Jan 16 21:11:57.020000 audit: BPF prog-id=179 op=LOAD Jan 16 21:11:57.020000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.020000 audit: BPF prog-id=179 op=UNLOAD Jan 16 21:11:57.020000 audit[4160]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.021000 audit: BPF prog-id=180 op=LOAD Jan 16 21:11:57.021000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.021000 audit: BPF prog-id=181 op=LOAD Jan 16 21:11:57.021000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.021000 audit: BPF prog-id=181 op=UNLOAD Jan 16 21:11:57.021000 audit[4160]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.021000 audit: BPF prog-id=180 op=UNLOAD Jan 16 21:11:57.021000 audit[4160]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.025223 containerd[1584]: time="2026-01-16T21:11:57.025186620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:11:57.022000 audit: BPF prog-id=182 op=LOAD Jan 16 21:11:57.022000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4149 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303662383031306632326233616663373433653261343436643534 Jan 16 21:11:57.027131 containerd[1584]: time="2026-01-16T21:11:57.026907010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:11:57.027515 containerd[1584]: time="2026-01-16T21:11:57.027491690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:11:57.029003 kubelet[2777]: E0116 21:11:57.028901 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:11:57.029003 kubelet[2777]: E0116 21:11:57.028954 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:11:57.029166 kubelet[2777]: E0116 21:11:57.029084 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:11:57.030537 kubelet[2777]: E0116 21:11:57.030462 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3" Jan 16 21:11:57.033814 containerd[1584]: time="2026-01-16T21:11:57.032921176Z" level=info msg="connecting to shim 2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d" address="unix:///run/containerd/s/b16581f6dbe5f254d01790049457893004ca44c76b1c8dcf14991ba7d314b939" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:57.085233 systemd[1]: Started cri-containerd-2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d.scope - libcontainer container 2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d. Jan 16 21:11:57.105618 containerd[1584]: time="2026-01-16T21:11:57.105531035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67cf49786f-th49d,Uid:4a350016-b4b1-4c4d-a81e-6fa230a1b42f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9206b8010f22b3afc743e2a446d5459edeb71da771d1cd1fef608d791e996ed5\"" Jan 16 21:11:57.108800 containerd[1584]: time="2026-01-16T21:11:57.108464011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:11:57.113000 audit: BPF prog-id=183 op=LOAD Jan 16 21:11:57.114000 audit: BPF prog-id=184 op=LOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=184 op=UNLOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=185 op=LOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=186 op=LOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=186 op=UNLOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=185 op=UNLOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.114000 audit: BPF prog-id=187 op=LOAD Jan 16 21:11:57.114000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265653639616238363866326131666533663435653033383233356566 Jan 16 21:11:57.178504 containerd[1584]: time="2026-01-16T21:11:57.178450879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-rsm8g,Uid:98509e25-dadb-4a53-8355-8cb0c0d71e14,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ee69ab868f2a1fe3f45e038235ef90013ad42841df866e317cd439780936a1d\"" Jan 16 21:11:57.381195 systemd-networkd[1484]: cali5af656c0c98: Gained IPv6LL Jan 16 21:11:57.474793 containerd[1584]: time="2026-01-16T21:11:57.474548297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
16 21:11:57.476216 containerd[1584]: time="2026-01-16T21:11:57.476072639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:11:57.476216 containerd[1584]: time="2026-01-16T21:11:57.476080409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:11:57.476751 kubelet[2777]: E0116 21:11:57.476578 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:11:57.476751 kubelet[2777]: E0116 21:11:57.476635 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:11:57.477663 kubelet[2777]: E0116 21:11:57.477484 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzr5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67cf49786f-th49d_calico-system(4a350016-b4b1-4c4d-a81e-6fa230a1b42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:11:57.478174 containerd[1584]: time="2026-01-16T21:11:57.478129005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:11:57.479335 kubelet[2777]: E0116 21:11:57.479227 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:11:57.820457 containerd[1584]: time="2026-01-16T21:11:57.820349305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:11:57.822627 containerd[1584]: time="2026-01-16T21:11:57.822556428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:11:57.822860 containerd[1584]: time="2026-01-16T21:11:57.822679294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:11:57.822937 kubelet[2777]: E0116 21:11:57.822903 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:11:57.823659 kubelet[2777]: E0116 21:11:57.822951 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:11:57.823659 kubelet[2777]: E0116 21:11:57.823120 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4h9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-rsm8g_calico-apiserver(98509e25-dadb-4a53-8355-8cb0c0d71e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:11:57.824622 kubelet[2777]: E0116 21:11:57.824539 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:11:57.902716 kubelet[2777]: E0116 21:11:57.902417 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:11:57.906757 kubelet[2777]: E0116 21:11:57.904075 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:11:57.907634 kubelet[2777]: E0116 21:11:57.905693 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3" Jan 16 21:11:57.949893 systemd-networkd[1484]: calif653c38eea2: Gained IPv6LL Jan 16 21:11:57.989000 audit[4257]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:57.989000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe9831afb0 a2=0 a3=7ffe9831af9c items=0 ppid=2909 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:57.993000 audit[4257]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:57.993000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9831afb0 a2=0 a3=0 items=0 ppid=2909 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:57.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:58.011000 audit[4259]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:58.011000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea035eaf0 a2=0 a3=7ffea035eadc items=0 ppid=2909 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:58.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:58.019000 audit[4259]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4259 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:11:58.019000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea035eaf0 a2=0 a3=0 items=0 ppid=2909 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:58.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:11:58.532611 kubelet[2777]: E0116 21:11:58.532553 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:11:58.536170 containerd[1584]: time="2026-01-16T21:11:58.536092535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-54lnr,Uid:3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:11:58.537243 containerd[1584]: time="2026-01-16T21:11:58.537079225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqdjm,Uid:948ce78a-6a96-42a6-a8f0-360a2ec834df,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:58.537243 containerd[1584]: time="2026-01-16T21:11:58.537180713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snhm5,Uid:c84a3cf3-ba8e-4381-80a9-376236ad5286,Namespace:kube-system,Attempt:0,}" Jan 16 21:11:58.537492 containerd[1584]: time="2026-01-16T21:11:58.537256890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcnrm,Uid:3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529,Namespace:calico-system,Attempt:0,}" Jan 16 21:11:58.719902 systemd-networkd[1484]: calidb8e7147dce: Gained IPv6LL Jan 16 21:11:58.916936 kubelet[2777]: E0116 21:11:58.916160 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:11:58.920275 kubelet[2777]: E0116 21:11:58.920205 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:11:59.143571 systemd-networkd[1484]: cali4cbf4ea1a60: Link UP Jan 16 21:11:59.143772 systemd-networkd[1484]: cali4cbf4ea1a60: Gained carrier Jan 16 21:11:59.194316 containerd[1584]: 2026-01-16 21:11:58.778 [INFO][4276] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:59.194316 containerd[1584]: 2026-01-16 21:11:58.812 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0 csi-node-driver- calico-system 948ce78a-6a96-42a6-a8f0-360a2ec834df 723 0 2026-01-16 21:11:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b csi-node-driver-fqdjm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4cbf4ea1a60 [] [] }} ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-" Jan 16 21:11:59.194316 containerd[1584]: 2026-01-16 21:11:58.812 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.194316 containerd[1584]: 2026-01-16 21:11:58.943 [INFO][4339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" HandleID="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Workload="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:58.945 [INFO][4339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" HandleID="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Workload="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"csi-node-driver-fqdjm", "timestamp":"2026-01-16 21:11:58.943203988 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:58.946 [INFO][4339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:58.948 [INFO][4339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:58.948 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:59.001 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:59.017 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:59.039 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:59.052 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.196322 containerd[1584]: 2026-01-16 21:11:59.069 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.069 [INFO][4339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.078 [INFO][4339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8 Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.089 [INFO][4339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.131 [INFO][4339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.196/26] block=192.168.29.192/26 handle="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.131 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.196/26] handle="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.131 [INFO][4339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:11:59.197529 containerd[1584]: 2026-01-16 21:11:59.131 [INFO][4339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.196/26] IPv6=[] ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" HandleID="k8s-pod-network.edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Workload="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.198192 containerd[1584]: 2026-01-16 21:11:59.138 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ce78a-6a96-42a6-a8f0-360a2ec834df", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"csi-node-driver-fqdjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cbf4ea1a60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.198314 containerd[1584]: 2026-01-16 21:11:59.138 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.196/32] ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.198314 containerd[1584]: 2026-01-16 21:11:59.138 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cbf4ea1a60 ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.198314 containerd[1584]: 2026-01-16 21:11:59.144 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.198450 containerd[1584]: 2026-01-16 21:11:59.146 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ce78a-6a96-42a6-a8f0-360a2ec834df", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8", Pod:"csi-node-driver-fqdjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cbf4ea1a60", MAC:"2a:c7:ef:04:23:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.198545 containerd[1584]: 2026-01-16 21:11:59.189 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" Namespace="calico-system" Pod="csi-node-driver-fqdjm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-csi--node--driver--fqdjm-eth0" Jan 16 21:11:59.258649 containerd[1584]: time="2026-01-16T21:11:59.258021048Z" level=info msg="connecting to shim edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8" address="unix:///run/containerd/s/bb1d0008189ddecd3c0c5e17c775526cfd148da5488cb2120055e4167a389f9a" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:59.305390 systemd[1]: Started cri-containerd-edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8.scope - libcontainer container edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8. 
Jan 16 21:11:59.383894 systemd-networkd[1484]: cali5edf02bd74e: Link UP Jan 16 21:11:59.386996 systemd-networkd[1484]: cali5edf02bd74e: Gained carrier Jan 16 21:11:59.434933 containerd[1584]: 2026-01-16 21:11:58.772 [INFO][4274] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:59.434933 containerd[1584]: 2026-01-16 21:11:58.813 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0 calico-apiserver-88cb9dd67- calico-apiserver 3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f 837 0 2026-01-16 21:11:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88cb9dd67 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b calico-apiserver-88cb9dd67-54lnr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5edf02bd74e [] [] }} ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-" Jan 16 21:11:59.434933 containerd[1584]: 2026-01-16 21:11:58.815 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.434933 containerd[1584]: 2026-01-16 21:11:58.956 [INFO][4344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" HandleID="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:58.960 [INFO][4344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" HandleID="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000422390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580.0.0-p-735bf5553b", "pod":"calico-apiserver-88cb9dd67-54lnr", "timestamp":"2026-01-16 21:11:58.956041745 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:58.960 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.132 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.132 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.190 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.237 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.283 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.311 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.437096 containerd[1584]: 2026-01-16 21:11:59.315 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.315 [INFO][4344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.318 [INFO][4344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75 Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.328 [INFO][4344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.197/26] block=192.168.29.192/26 handle="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.197/26] handle="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:11:59.439528 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.197/26] IPv6=[] ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" HandleID="k8s-pod-network.4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Workload="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.440376 containerd[1584]: 2026-01-16 21:11:59.369 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0", GenerateName:"calico-apiserver-88cb9dd67-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88cb9dd67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"calico-apiserver-88cb9dd67-54lnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5edf02bd74e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.440511 containerd[1584]: 2026-01-16 21:11:59.369 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.197/32] ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.440511 containerd[1584]: 2026-01-16 21:11:59.369 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5edf02bd74e ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.440511 containerd[1584]: 2026-01-16 21:11:59.386 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.440639 containerd[1584]: 2026-01-16 21:11:59.388 [INFO][4274] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0", GenerateName:"calico-apiserver-88cb9dd67-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88cb9dd67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75", Pod:"calico-apiserver-88cb9dd67-54lnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5edf02bd74e", MAC:"1e:26:58:d4:ea:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.442258 containerd[1584]: 2026-01-16 21:11:59.417 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" Namespace="calico-apiserver" Pod="calico-apiserver-88cb9dd67-54lnr" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-calico--apiserver--88cb9dd67--54lnr-eth0" Jan 16 21:11:59.543855 kernel: kauditd_printk_skb: 83 callbacks suppressed Jan 16 21:11:59.544025 kernel: audit: type=1334 audit(1768597919.539:605): prog-id=188 op=LOAD Jan 16 21:11:59.539000 audit: BPF prog-id=188 op=LOAD Jan 16 21:11:59.544000 audit: BPF prog-id=189 op=LOAD Jan 16 21:11:59.549814 kernel: audit: type=1334 audit(1768597919.544:606): prog-id=189 op=LOAD Jan 16 21:11:59.544000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.561701 kernel: audit: type=1300 audit(1768597919.544:606): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.544000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.574081 kernel: audit: type=1327 audit(1768597919.544:606): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.574235 kernel: audit: type=1334 audit(1768597919.550:607): prog-id=189 op=UNLOAD Jan 16 21:11:59.574272 kernel: audit: type=1300 audit(1768597919.550:607): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.550000 audit: BPF prog-id=189 op=UNLOAD Jan 16 21:11:59.550000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.575929 kernel: audit: type=1327 audit(1768597919.550:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.581157 kernel: audit: type=1334 audit(1768597919.559:608): prog-id=190 op=LOAD Jan 16 21:11:59.559000 audit: BPF prog-id=190 op=LOAD Jan 16 21:11:59.559000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.583555 kernel: audit: type=1300 audit(1768597919.559:608): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.588983 kernel: audit: type=1327 audit(1768597919.559:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 
21:11:59.559000 audit: BPF prog-id=191 op=LOAD Jan 16 21:11:59.559000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.559000 audit: BPF prog-id=191 op=UNLOAD Jan 16 21:11:59.559000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.559000 audit: BPF prog-id=190 op=UNLOAD Jan 16 21:11:59.559000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.599000 audit: BPF prog-id=192 op=LOAD Jan 16 21:11:59.599000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4376 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564626361626462386236313834366630356561373434316236373837 Jan 16 21:11:59.653061 systemd-networkd[1484]: calia68a349e118: Link UP Jan 16 21:11:59.654869 systemd-networkd[1484]: calia68a349e118: Gained carrier Jan 16 21:11:59.678294 containerd[1584]: time="2026-01-16T21:11:59.677926964Z" level=info msg="connecting to shim 4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75" address="unix:///run/containerd/s/621a927f0997e9e8c438c739c4309d74d6e82c5f157068591364165b3660f386" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:59.703858 containerd[1584]: time="2026-01-16T21:11:59.703799765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqdjm,Uid:948ce78a-6a96-42a6-a8f0-360a2ec834df,Namespace:calico-system,Attempt:0,} returns sandbox id \"edbcabdb8b61846f05ea7441b67870a03893348cd080ae190639f2ff915fc9e8\"" Jan 16 21:11:59.717144 containerd[1584]: 
time="2026-01-16T21:11:59.716917795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:11:59.739428 containerd[1584]: 2026-01-16 21:11:58.725 [INFO][4288] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:59.739428 containerd[1584]: 2026-01-16 21:11:58.765 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0 goldmane-666569f655- calico-system 3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529 834 0 2026-01-16 21:11:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b goldmane-666569f655-zcnrm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia68a349e118 [] [] }} ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-" Jan 16 21:11:59.739428 containerd[1584]: 2026-01-16 21:11:58.766 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.739428 containerd[1584]: 2026-01-16 21:11:58.996 [INFO][4327] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" HandleID="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Workload="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.000 [INFO][4327] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" HandleID="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Workload="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"goldmane-666569f655-zcnrm", "timestamp":"2026-01-16 21:11:58.996292355 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.001 [INFO][4327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.356 [INFO][4327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.419 [INFO][4327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.449 [INFO][4327] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.476 [INFO][4327] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.488 [INFO][4327] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.740209 containerd[1584]: 2026-01-16 21:11:59.499 [INFO][4327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.499 [INFO][4327] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.505 [INFO][4327] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68 Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.520 [INFO][4327] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4327] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.198/26] block=192.168.29.192/26 handle="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.198/26] handle="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:11:59.741152 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4327] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.198/26] IPv6=[] ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" HandleID="k8s-pod-network.c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Workload="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.742095 containerd[1584]: 2026-01-16 21:11:59.603 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"goldmane-666569f655-zcnrm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia68a349e118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.742418 containerd[1584]: 2026-01-16 21:11:59.603 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.198/32] ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.742418 containerd[1584]: 2026-01-16 21:11:59.603 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia68a349e118 ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.742418 containerd[1584]: 2026-01-16 21:11:59.657 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.742578 containerd[1584]: 2026-01-16 21:11:59.661 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" 
Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68", Pod:"goldmane-666569f655-zcnrm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia68a349e118", MAC:"3e:ad:b3:39:47:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.742680 containerd[1584]: 2026-01-16 21:11:59.690 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" Namespace="calico-system" Pod="goldmane-666569f655-zcnrm" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-goldmane--666569f655--zcnrm-eth0" Jan 16 21:11:59.823025 systemd[1]: Started cri-containerd-4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75.scope - libcontainer container 4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75. 
Jan 16 21:11:59.854774 containerd[1584]: time="2026-01-16T21:11:59.853100819Z" level=info msg="connecting to shim c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68" address="unix:///run/containerd/s/0903e41728bd315782a3e3a1ba11e875a6c5c99f730012ece574858d2f593756" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:11:59.867019 systemd-networkd[1484]: cali38c484fa597: Link UP Jan 16 21:11:59.882651 systemd-networkd[1484]: cali38c484fa597: Gained carrier Jan 16 21:11:59.928425 containerd[1584]: 2026-01-16 21:11:58.789 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:11:59.928425 containerd[1584]: 2026-01-16 21:11:58.830 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0 coredns-668d6bf9bc- kube-system c84a3cf3-ba8e-4381-80a9-376236ad5286 831 0 2026-01-16 21:11:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b coredns-668d6bf9bc-snhm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38c484fa597 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-" Jan 16 21:11:59.928425 containerd[1584]: 2026-01-16 21:11:58.830 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.928425 containerd[1584]: 2026-01-16 21:11:59.047 [INFO][4350] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" HandleID="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.047 [INFO][4350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" HandleID="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"coredns-668d6bf9bc-snhm5", "timestamp":"2026-01-16 21:11:59.047419705 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.048 [INFO][4350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.546 [INFO][4350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.660 [INFO][4350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.700 [INFO][4350] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.752 [INFO][4350] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.764 [INFO][4350] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929318 containerd[1584]: 2026-01-16 21:11:59.770 [INFO][4350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.770 [INFO][4350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.776 [INFO][4350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23 Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.801 [INFO][4350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.823 [INFO][4350] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.199/26] block=192.168.29.192/26 handle="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.827 [INFO][4350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.199/26] handle="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.827 [INFO][4350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
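The coredns-668d6bf9bc-snhm5 allocation above logged "About to acquire host-wide IPAM lock" at 21:11:59.048 but only "Acquired" at 21:11:59.546, the same instant the goldmane allocation released it: IPAM requests on a node are serialized behind that lock. A quick check of the wait (timestamps copied from the log entries above, code merely illustrative):

    from datetime import datetime

    about_to_acquire = datetime.fromisoformat("2026-01-16 21:11:59.048")
    acquired = datetime.fromisoformat("2026-01-16 21:11:59.546")

    # Roughly half a second spent queued behind the goldmane allocation.
    print((acquired - about_to_acquire).total_seconds())  # 0.498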
Jan 16 21:11:59.929700 containerd[1584]: 2026-01-16 21:11:59.827 [INFO][4350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.199/26] IPv6=[] ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" HandleID="k8s-pod-network.99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.839 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c84a3cf3-ba8e-4381-80a9-376236ad5286", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"coredns-668d6bf9bc-snhm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38c484fa597", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.841 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.199/32] ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.841 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38c484fa597 ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.883 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.886 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c84a3cf3-ba8e-4381-80a9-376236ad5286", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23", Pod:"coredns-668d6bf9bc-snhm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38c484fa597", MAC:"0a:61:8d:8c:e2:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:11:59.931461 containerd[1584]: 2026-01-16 21:11:59.916 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" Namespace="kube-system" Pod="coredns-668d6bf9bc-snhm5" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--snhm5-eth0" Jan 16 21:11:59.987000 audit: BPF prog-id=193 op=LOAD Jan 16 21:11:59.988000 audit: BPF prog-id=194 op=LOAD Jan 16 21:11:59.988000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.989000 audit: BPF prog-id=194 op=UNLOAD Jan 16 21:11:59.989000 
audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.990000 audit: BPF prog-id=195 op=LOAD Jan 16 21:11:59.990000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.991000 audit: BPF prog-id=196 op=LOAD Jan 16 21:11:59.991000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.992000 audit: BPF prog-id=196 op=UNLOAD Jan 16 21:11:59.992000 audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.992000 audit: BPF prog-id=195 op=UNLOAD Jan 16 21:11:59.992000 audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:11:59.992000 audit: BPF prog-id=197 op=LOAD Jan 16 21:11:59.992000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4435 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:11:59.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366364656564663938303539646135306136353065326363633766 Jan 16 21:12:00.002315 systemd[1]: Started cri-containerd-c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68.scope - libcontainer container c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68. Jan 16 21:12:00.030802 containerd[1584]: time="2026-01-16T21:12:00.030523149Z" level=info msg="connecting to shim 99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23" address="unix:///run/containerd/s/7a5413565b2b8a6cb917e5de46dde7d361eb1214f6feda2e40959de3e193e8eb" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:12:00.069959 containerd[1584]: time="2026-01-16T21:12:00.069883419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:00.072562 containerd[1584]: time="2026-01-16T21:12:00.072490088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:12:00.075958 containerd[1584]: time="2026-01-16T21:12:00.072611977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:00.076090 kubelet[2777]: E0116 21:12:00.072881 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:00.076090 kubelet[2777]: E0116 21:12:00.073031 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:00.076090 kubelet[2777]: E0116 21:12:00.075238 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:00.083987 containerd[1584]: time="2026-01-16T21:12:00.083905755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:12:00.107184 systemd[1]: Started cri-containerd-99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23.scope - libcontainer container 99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23. 
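Both image pulls in this run fail the same way: ghcr.io answers 404 Not Found for the requested tag and containerd surfaces it to kubelet as NotFound/ErrImagePull. A hedged sketch of how the tag could be probed against a Docker Registry v2 endpoint from another machine; the repository and tag names come from the PullImage errors in the log, while the anonymous-token flow and endpoints below are an assumption about ghcr.io's public API, not something this log confirms:

    import json
    import urllib.error
    import urllib.request

    repo, tag = "flatcar/calico/csi", "v3.30.4"  # names taken from the PullImage errors above

    # Anonymous pull token, then a manifest lookup; a 404 here corresponds to the
    # "failed to resolve image ... not found" errors reported by containerd/kubelet.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.oci.image.manifest.v1+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
    )
    try:
        print(urllib.request.urlopen(req).status)  # 200 would mean the tag resolves
    except urllib.error.HTTPError as err:
        print(err.code)                            # 404 reproduces the failure in the log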
Jan 16 21:12:00.108000 audit: BPF prog-id=198 op=LOAD Jan 16 21:12:00.109000 audit: BPF prog-id=199 op=LOAD Jan 16 21:12:00.109000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=199 op=UNLOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=200 op=LOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=201 op=LOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=201 op=UNLOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=200 op=UNLOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.111000 audit: BPF prog-id=202 op=LOAD Jan 16 21:12:00.111000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4479 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338613764383732373438326365613635646363306533303763623335 Jan 16 21:12:00.136000 audit: BPF prog-id=203 op=LOAD Jan 16 21:12:00.137000 audit: BPF prog-id=204 op=LOAD Jan 16 21:12:00.137000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.139000 audit: BPF prog-id=204 op=UNLOAD Jan 16 21:12:00.139000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.139000 audit: BPF prog-id=205 op=LOAD Jan 16 21:12:00.139000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.140000 audit: BPF prog-id=206 op=LOAD Jan 16 21:12:00.140000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4523 pid=4540 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.140000 audit: BPF prog-id=206 op=UNLOAD Jan 16 21:12:00.140000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.140000 audit: BPF prog-id=205 op=UNLOAD Jan 16 21:12:00.140000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.140000 audit: BPF prog-id=207 op=LOAD Jan 16 21:12:00.140000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939623539663731383733653461363962316435376561396266393736 Jan 16 21:12:00.200592 containerd[1584]: time="2026-01-16T21:12:00.200527764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88cb9dd67-54lnr,Uid:3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4d6cdeedf98059da50a650e2ccc7f3235a11b931f760396e194ae05580dcae75\"" Jan 16 21:12:00.231774 kubelet[2777]: I0116 21:12:00.230986 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 21:12:00.241917 kubelet[2777]: E0116 21:12:00.241800 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:00.245683 containerd[1584]: time="2026-01-16T21:12:00.245625337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snhm5,Uid:c84a3cf3-ba8e-4381-80a9-376236ad5286,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23\"" Jan 16 21:12:00.248373 kubelet[2777]: E0116 21:12:00.248223 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:00.254183 containerd[1584]: time="2026-01-16T21:12:00.254134370Z" level=info msg="CreateContainer within sandbox \"99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:12:00.277852 containerd[1584]: time="2026-01-16T21:12:00.277705558Z" level=info msg="Container b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:12:00.289939 containerd[1584]: time="2026-01-16T21:12:00.289887767Z" level=info msg="CreateContainer within sandbox \"99b59f71873e4a69b1d57ea9bf9768bdf6663129067c4a4b943582cc842cec23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622\"" Jan 16 21:12:00.292513 containerd[1584]: time="2026-01-16T21:12:00.292465889Z" level=info msg="StartContainer for \"b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622\"" Jan 16 21:12:00.295229 containerd[1584]: time="2026-01-16T21:12:00.295114114Z" level=info msg="connecting to shim b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622" address="unix:///run/containerd/s/7a5413565b2b8a6cb917e5de46dde7d361eb1214f6feda2e40959de3e193e8eb" protocol=ttrpc version=3 Jan 16 21:12:00.323138 systemd[1]: Started cri-containerd-b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622.scope - libcontainer container b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622. 
Jan 16 21:12:00.352000 audit: BPF prog-id=208 op=LOAD Jan 16 21:12:00.354000 audit: BPF prog-id=209 op=LOAD Jan 16 21:12:00.354000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.354000 audit: BPF prog-id=209 op=UNLOAD Jan 16 21:12:00.354000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.354000 audit: BPF prog-id=210 op=LOAD Jan 16 21:12:00.354000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.355000 audit: BPF prog-id=211 op=LOAD Jan 16 21:12:00.355000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.355000 audit: BPF prog-id=211 op=UNLOAD Jan 16 21:12:00.355000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.355000 audit: BPF prog-id=210 op=UNLOAD Jan 16 21:12:00.355000 audit[4580]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.355000 audit: BPF prog-id=212 op=LOAD Jan 16 21:12:00.355000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4523 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323662343634353363613765633132303766386561343830373132 Jan 16 21:12:00.364000 audit[4600]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:00.364000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe434593e0 a2=0 a3=7ffe434593cc items=0 ppid=2909 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:00.375000 audit[4600]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:00.375000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe434593e0 a2=0 a3=7ffe434593cc items=0 ppid=2909 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:00.411316 containerd[1584]: time="2026-01-16T21:12:00.410488422Z" level=info msg="StartContainer for \"b526b46453ca7ec1207f8ea48071247a2132fe88c982a6bdb753cb7078531622\" returns successfully" Jan 16 21:12:00.419902 containerd[1584]: time="2026-01-16T21:12:00.419854909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcnrm,Uid:3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8a7d8727482cea65dcc0e307cb357883667bb027cf22a7d95ef49fcd352bc68\"" Jan 16 21:12:00.451156 containerd[1584]: time="2026-01-16T21:12:00.451084701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:00.452627 containerd[1584]: time="2026-01-16T21:12:00.452306078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:12:00.452627 containerd[1584]: time="2026-01-16T21:12:00.452330856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:00.453004 kubelet[2777]: E0116 21:12:00.452959 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:00.453663 kubelet[2777]: E0116 21:12:00.453021 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:00.453663 kubelet[2777]: E0116 21:12:00.453470 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:00.454597 containerd[1584]: time="2026-01-16T21:12:00.454015070Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:12:00.455308 kubelet[2777]: E0116 21:12:00.455095 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:12:00.538213 kubelet[2777]: E0116 21:12:00.537851 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:00.539282 containerd[1584]: time="2026-01-16T21:12:00.539116843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrnbt,Uid:19564b0c-668b-4551-9f0e-2c1106af1e44,Namespace:kube-system,Attempt:0,}" Jan 16 21:12:00.778463 systemd-networkd[1484]: calibc2d6fa3025: Link UP Jan 16 21:12:00.781337 systemd-networkd[1484]: calibc2d6fa3025: Gained carrier Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.619 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.651 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0 coredns-668d6bf9bc- kube-system 19564b0c-668b-4551-9f0e-2c1106af1e44 827 0 2026-01-16 21:11:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580.0.0-p-735bf5553b coredns-668d6bf9bc-wrnbt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc2d6fa3025 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.651 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.712 [INFO][4638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" HandleID="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.712 [INFO][4638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" 
HandleID="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580.0.0-p-735bf5553b", "pod":"coredns-668d6bf9bc-wrnbt", "timestamp":"2026-01-16 21:12:00.71239847 +0000 UTC"}, Hostname:"ci-4580.0.0-p-735bf5553b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.712 [INFO][4638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.712 [INFO][4638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.713 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-735bf5553b' Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.722 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.728 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.737 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.741 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.745 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.745 [INFO][4638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.748 [INFO][4638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0 Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.756 [INFO][4638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.769 [INFO][4638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.200/26] block=192.168.29.192/26 handle="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.769 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.200/26] handle="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" host="ci-4580.0.0-p-735bf5553b" Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.769 [INFO][4638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:12:00.817433 containerd[1584]: 2026-01-16 21:12:00.769 [INFO][4638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.200/26] IPv6=[] ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" HandleID="k8s-pod-network.9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Workload="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.775 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"19564b0c-668b-4551-9f0e-2c1106af1e44", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"", Pod:"coredns-668d6bf9bc-wrnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc2d6fa3025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.775 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.200/32] ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.775 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc2d6fa3025 ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.778 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.781 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"19564b0c-668b-4551-9f0e-2c1106af1e44", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-735bf5553b", ContainerID:"9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0", Pod:"coredns-668d6bf9bc-wrnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc2d6fa3025", MAC:"0e:14:15:d1:03:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:12:00.818176 containerd[1584]: 2026-01-16 21:12:00.811 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" Namespace="kube-system" Pod="coredns-668d6bf9bc-wrnbt" WorkloadEndpoint="ci--4580.0.0--p--735bf5553b-k8s-coredns--668d6bf9bc--wrnbt-eth0" Jan 16 21:12:00.845707 containerd[1584]: time="2026-01-16T21:12:00.845512746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:00.846596 containerd[1584]: time="2026-01-16T21:12:00.846443501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:12:00.846596 containerd[1584]: time="2026-01-16T21:12:00.846552205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:00.847591 kubelet[2777]: E0116 21:12:00.847506 2777 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:00.847591 kubelet[2777]: E0116 21:12:00.847561 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:00.848981 kubelet[2777]: E0116 21:12:00.848920 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwp8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-54lnr_calico-apiserver(3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:00.849177 containerd[1584]: time="2026-01-16T21:12:00.848667868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:12:00.850780 kubelet[2777]: E0116 21:12:00.850676 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:00.858442 containerd[1584]: time="2026-01-16T21:12:00.858328661Z" level=info msg="connecting to shim 9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0" address="unix:///run/containerd/s/00c4d37de59f700796d25844ecd97359c2d0ed38d4a4b901905e5253f210b2e2" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:12:00.907171 systemd[1]: Started cri-containerd-9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0.scope - libcontainer container 9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0. Jan 16 21:12:00.931000 audit: BPF prog-id=213 op=LOAD Jan 16 21:12:00.932000 audit: BPF prog-id=214 op=LOAD Jan 16 21:12:00.932000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.934000 audit: BPF prog-id=214 op=UNLOAD Jan 16 21:12:00.934000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.935000 audit: BPF prog-id=215 op=LOAD Jan 16 21:12:00.935000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.935000 audit: BPF prog-id=216 op=LOAD Jan 16 21:12:00.935000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.935000 audit: BPF prog-id=216 op=UNLOAD Jan 16 
21:12:00.935000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.935000 audit: BPF prog-id=215 op=UNLOAD Jan 16 21:12:00.935000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.935000 audit: BPF prog-id=217 op=LOAD Jan 16 21:12:00.935000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4660 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:00.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965663739613836303538616630623139613262306561353763346239 Jan 16 21:12:00.952380 kubelet[2777]: E0116 21:12:00.952344 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:00.958073 systemd-networkd[1484]: cali4cbf4ea1a60: Gained IPv6LL Jan 16 21:12:00.959566 kubelet[2777]: E0116 21:12:00.959114 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:00.963777 kubelet[2777]: E0116 21:12:00.963621 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:00.964874 kubelet[2777]: E0116 21:12:00.964669 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:12:01.035591 kubelet[2777]: I0116 21:12:01.035410 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-snhm5" podStartSLOduration=47.019895375 podStartE2EDuration="47.019895375s" podCreationTimestamp="2026-01-16 21:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:12:00.999082962 +0000 UTC m=+52.730054474" watchObservedRunningTime="2026-01-16 21:12:01.019895375 +0000 UTC m=+52.750866883" Jan 16 21:12:01.055404 containerd[1584]: time="2026-01-16T21:12:01.055359447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrnbt,Uid:19564b0c-668b-4551-9f0e-2c1106af1e44,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0\"" Jan 16 21:12:01.058514 kubelet[2777]: E0116 21:12:01.058470 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:01.061969 containerd[1584]: time="2026-01-16T21:12:01.061922185Z" level=info msg="CreateContainer within sandbox \"9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:12:01.063000 audit[4700]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:01.063000 audit[4700]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5f9d7b80 a2=0 a3=7ffe5f9d7b6c items=0 ppid=2909 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:01.068000 audit[4700]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:01.068000 audit[4700]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe5f9d7b80 a2=0 a3=0 items=0 ppid=2909 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:01.085382 containerd[1584]: time="2026-01-16T21:12:01.083040428Z" level=info msg="Container c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:12:01.102368 
containerd[1584]: time="2026-01-16T21:12:01.102276700Z" level=info msg="CreateContainer within sandbox \"9ef79a86058af0b19a2b0ea57c4b9448251823002d7ad581c46e1487dbc52fa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283\"" Jan 16 21:12:01.105113 containerd[1584]: time="2026-01-16T21:12:01.105052919Z" level=info msg="StartContainer for \"c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283\"" Jan 16 21:12:01.106886 containerd[1584]: time="2026-01-16T21:12:01.106837032Z" level=info msg="connecting to shim c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283" address="unix:///run/containerd/s/00c4d37de59f700796d25844ecd97359c2d0ed38d4a4b901905e5253f210b2e2" protocol=ttrpc version=3 Jan 16 21:12:01.143087 systemd[1]: Started cri-containerd-c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283.scope - libcontainer container c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283. Jan 16 21:12:01.150162 systemd-networkd[1484]: cali5edf02bd74e: Gained IPv6LL Jan 16 21:12:01.194000 audit: BPF prog-id=218 op=LOAD Jan 16 21:12:01.196000 audit: BPF prog-id=219 op=LOAD Jan 16 21:12:01.196000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.196000 audit: BPF prog-id=219 op=UNLOAD Jan 16 21:12:01.196000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.197000 audit: BPF prog-id=220 op=LOAD Jan 16 21:12:01.197000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.198000 audit: BPF prog-id=221 op=LOAD Jan 16 21:12:01.198000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.198000 audit: BPF prog-id=221 op=UNLOAD Jan 16 21:12:01.198000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.198000 audit: BPF prog-id=220 op=UNLOAD Jan 16 21:12:01.198000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.198000 audit: BPF prog-id=222 op=LOAD Jan 16 21:12:01.198000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4660 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613035306366653263653864376366313633386433303664613839 Jan 16 21:12:01.234703 containerd[1584]: time="2026-01-16T21:12:01.234639159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:01.237905 containerd[1584]: time="2026-01-16T21:12:01.237777241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:12:01.237905 containerd[1584]: time="2026-01-16T21:12:01.237796614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:01.238959 kubelet[2777]: E0116 21:12:01.238905 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:01.239872 kubelet[2777]: E0116 21:12:01.238973 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:01.239872 kubelet[2777]: E0116 21:12:01.239205 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcnrm_calico-system(3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:01.240917 kubelet[2777]: E0116 21:12:01.240861 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:12:01.256554 containerd[1584]: time="2026-01-16T21:12:01.256484644Z" level=info msg="StartContainer for \"c2a050cfe2ce8d7cf1638d306da8964aee03a2a68ebbefb7a030908df49c2283\" returns successfully" Jan 16 21:12:01.278301 systemd-networkd[1484]: cali38c484fa597: Gained IPv6LL Jan 16 21:12:01.599129 systemd-networkd[1484]: calia68a349e118: Gained IPv6LL Jan 16 21:12:01.649000 audit: BPF prog-id=223 op=LOAD Jan 16 21:12:01.649000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd27d98890 a2=98 a3=1fffffffffffffff items=0 ppid=4731 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.650000 audit: BPF prog-id=223 op=UNLOAD Jan 16 21:12:01.650000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd27d98860 a3=0 items=0 ppid=4731 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.650000 audit: BPF prog-id=224 op=LOAD Jan 16 21:12:01.650000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd27d98770 a2=94 a3=3 items=0 ppid=4731 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.650000 audit: BPF prog-id=224 op=UNLOAD Jan 16 21:12:01.650000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd27d98770 a2=94 a3=3 items=0 ppid=4731 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.650000 audit: BPF prog-id=225 op=LOAD Jan 16 21:12:01.650000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd27d987b0 a2=94 a3=7ffd27d98990 items=0 ppid=4731 
pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.650000 audit: BPF prog-id=225 op=UNLOAD Jan 16 21:12:01.650000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd27d987b0 a2=94 a3=7ffd27d98990 items=0 ppid=4731 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:12:01.653000 audit: BPF prog-id=226 op=LOAD Jan 16 21:12:01.653000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6c96faf0 a2=98 a3=3 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.653000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.653000 audit: BPF prog-id=226 op=UNLOAD Jan 16 21:12:01.653000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6c96fac0 a3=0 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.653000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.654000 audit: BPF prog-id=227 op=LOAD Jan 16 21:12:01.654000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6c96f8e0 a2=94 a3=54428f items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.655000 audit: BPF prog-id=227 op=UNLOAD Jan 16 21:12:01.655000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6c96f8e0 a2=94 a3=54428f items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.655000 audit: BPF prog-id=228 op=LOAD Jan 16 21:12:01.655000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6c96f910 a2=94 a3=2 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.655000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.655000 audit: BPF prog-id=228 op=UNLOAD Jan 16 21:12:01.655000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6c96f910 a2=0 a3=2 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.882000 audit: BPF prog-id=229 op=LOAD Jan 16 21:12:01.882000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6c96f7d0 a2=94 a3=1 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.882000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.882000 audit: BPF prog-id=229 op=UNLOAD Jan 16 21:12:01.882000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6c96f7d0 a2=94 a3=1 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.882000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.900000 audit: BPF prog-id=230 op=LOAD Jan 16 21:12:01.900000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6c96f7c0 a2=94 a3=4 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.900000 audit: BPF prog-id=230 op=UNLOAD Jan 16 21:12:01.900000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6c96f7c0 a2=0 a3=4 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.900000 audit: BPF prog-id=231 op=LOAD Jan 16 21:12:01.900000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6c96f620 a2=94 a3=5 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.900000 audit: BPF prog-id=231 op=UNLOAD Jan 16 21:12:01.900000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6c96f620 a2=0 a3=5 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.901000 audit: BPF prog-id=232 op=LOAD Jan 16 21:12:01.901000 audit[4781]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=5 a1=7ffc6c96f840 a2=94 a3=6 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.901000 audit: BPF prog-id=232 op=UNLOAD Jan 16 21:12:01.901000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6c96f840 a2=0 a3=6 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.901000 audit: BPF prog-id=233 op=LOAD Jan 16 21:12:01.901000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6c96eff0 a2=94 a3=88 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.901000 audit: BPF prog-id=234 op=LOAD Jan 16 21:12:01.901000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc6c96ee70 a2=94 a3=2 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.901000 audit: BPF prog-id=234 op=UNLOAD Jan 16 21:12:01.901000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc6c96eea0 a2=0 a3=7ffc6c96efa0 items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.902000 audit: BPF prog-id=233 op=UNLOAD Jan 16 21:12:01.902000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=18deed10 a2=0 a3=1220c1bc8788331c items=0 ppid=4731 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:12:01.921000 audit: BPF prog-id=235 op=LOAD Jan 16 21:12:01.921000 audit[4784]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce4e43340 a2=98 a3=1999999999999999 items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.922000 audit: BPF prog-id=235 op=UNLOAD Jan 16 
21:12:01.922000 audit[4784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce4e43310 a3=0 items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.922000 audit: BPF prog-id=236 op=LOAD Jan 16 21:12:01.922000 audit[4784]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce4e43220 a2=94 a3=ffff items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.922000 audit: BPF prog-id=236 op=UNLOAD Jan 16 21:12:01.922000 audit[4784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce4e43220 a2=94 a3=ffff items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.922000 audit: BPF prog-id=237 op=LOAD Jan 16 21:12:01.922000 audit[4784]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce4e43260 a2=94 a3=7ffce4e43440 items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.922000 audit: BPF prog-id=237 op=UNLOAD Jan 16 21:12:01.922000 audit[4784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce4e43260 a2=94 a3=7ffce4e43440 items=0 ppid=4731 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:01.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:12:01.965768 kubelet[2777]: E0116 21:12:01.965664 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:01.971465 kubelet[2777]: E0116 21:12:01.967423 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:01.971832 kubelet[2777]: E0116 21:12:01.969467 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:01.972134 kubelet[2777]: E0116 21:12:01.969534 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:12:02.044717 kubelet[2777]: I0116 21:12:02.044577 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wrnbt" podStartSLOduration=48.044547018 podStartE2EDuration="48.044547018s" podCreationTimestamp="2026-01-16 21:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:12:02.042253456 +0000 UTC m=+53.773224971" watchObservedRunningTime="2026-01-16 21:12:02.044547018 +0000 UTC m=+53.775518555" Jan 16 21:12:02.089000 audit[4797]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:02.089000 audit[4797]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe07a9fa60 a2=0 a3=7ffe07a9fa4c items=0 ppid=2909 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:02.093000 audit[4797]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:02.093000 audit[4797]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe07a9fa60 a2=0 a3=7ffe07a9fa4c items=0 ppid=2909 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:02.159093 systemd-networkd[1484]: vxlan.calico: Link UP Jan 16 21:12:02.159107 
systemd-networkd[1484]: vxlan.calico: Gained carrier Jan 16 21:12:02.210000 audit: BPF prog-id=238 op=LOAD Jan 16 21:12:02.210000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce852a200 a2=98 a3=0 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.211000 audit: BPF prog-id=238 op=UNLOAD Jan 16 21:12:02.211000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce852a1d0 a3=0 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.211000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.211000 audit: BPF prog-id=239 op=LOAD Jan 16 21:12:02.211000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce852a010 a2=94 a3=54428f items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.211000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.211000 audit: BPF prog-id=239 op=UNLOAD Jan 16 21:12:02.211000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce852a010 a2=94 a3=54428f items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.211000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.211000 audit: BPF prog-id=240 op=LOAD Jan 16 21:12:02.211000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce852a040 a2=94 a3=2 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.211000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.212000 audit: BPF prog-id=240 op=UNLOAD Jan 16 21:12:02.212000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce852a040 a2=0 a3=2 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.212000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.212000 audit: BPF prog-id=241 op=LOAD Jan 16 21:12:02.212000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce8529df0 a2=94 a3=4 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.212000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.212000 audit: BPF prog-id=241 op=UNLOAD Jan 16 21:12:02.212000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce8529df0 a2=94 a3=4 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.212000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.212000 audit: BPF prog-id=242 op=LOAD Jan 16 21:12:02.212000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce8529ef0 a2=94 a3=7ffce852a070 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.212000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.212000 audit: BPF prog-id=242 op=UNLOAD Jan 16 21:12:02.212000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce8529ef0 a2=0 a3=7ffce852a070 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.212000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.215000 audit: BPF prog-id=243 op=LOAD Jan 16 21:12:02.215000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce8529620 a2=94 a3=2 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.215000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.217000 audit: BPF prog-id=243 op=UNLOAD Jan 16 21:12:02.217000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce8529620 a2=0 a3=2 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.217000 audit: BPF prog-id=244 op=LOAD Jan 16 21:12:02.217000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce8529720 a2=94 a3=30 items=0 ppid=4731 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:12:02.231000 audit: BPF prog-id=245 op=LOAD Jan 16 21:12:02.231000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6ea238a0 a2=98 a3=0 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.232000 audit: BPF prog-id=245 op=UNLOAD Jan 16 21:12:02.232000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6ea23870 a3=0 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.232000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.232000 audit: BPF prog-id=246 op=LOAD Jan 16 21:12:02.232000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6ea23690 a2=94 a3=54428f items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.232000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.233000 audit: BPF prog-id=246 op=UNLOAD Jan 16 21:12:02.233000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6ea23690 a2=94 a3=54428f items=0 ppid=4731 pid=4817 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.233000 audit: BPF prog-id=247 op=LOAD Jan 16 21:12:02.233000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6ea236c0 a2=94 a3=2 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.233000 audit: BPF prog-id=247 op=UNLOAD Jan 16 21:12:02.233000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6ea236c0 a2=0 a3=2 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.468000 audit: BPF prog-id=248 op=LOAD Jan 16 21:12:02.468000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6ea23580 a2=94 a3=1 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.468000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.468000 audit: BPF prog-id=248 op=UNLOAD Jan 16 21:12:02.468000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6ea23580 a2=94 a3=1 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.468000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=249 op=LOAD Jan 16 21:12:02.481000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6ea23570 a2=94 a3=4 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=249 op=UNLOAD Jan 16 21:12:02.481000 
audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6ea23570 a2=0 a3=4 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=250 op=LOAD Jan 16 21:12:02.481000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6ea233d0 a2=94 a3=5 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=250 op=UNLOAD Jan 16 21:12:02.481000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6ea233d0 a2=0 a3=5 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=251 op=LOAD Jan 16 21:12:02.481000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6ea235f0 a2=94 a3=6 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.481000 audit: BPF prog-id=251 op=UNLOAD Jan 16 21:12:02.481000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6ea235f0 a2=0 a3=6 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.482000 audit: BPF prog-id=252 op=LOAD Jan 16 21:12:02.482000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6ea22da0 a2=94 a3=88 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.482000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.482000 audit: BPF prog-id=253 op=LOAD Jan 16 21:12:02.482000 audit[4817]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc6ea22c20 a2=94 a3=2 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.482000 audit: BPF prog-id=253 op=UNLOAD Jan 16 21:12:02.482000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc6ea22c50 a2=0 a3=7ffc6ea22d50 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.482000 audit: BPF prog-id=252 op=UNLOAD Jan 16 21:12:02.482000 audit[4817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a25cd10 a2=0 a3=fb91f012f07bde52 items=0 ppid=4731 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:12:02.492000 audit: BPF prog-id=244 op=UNLOAD Jan 16 21:12:02.492000 audit[4731]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0004ec2c0 a2=0 a3=0 items=0 ppid=3954 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.492000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 21:12:02.619000 audit[4845]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:12:02.619000 audit[4845]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc79174af0 a2=0 a3=7ffc79174adc items=0 ppid=4731 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.619000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:12:02.638000 audit[4849]: NETFILTER_CFG table=mangle:130 family=2 entries=16 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:12:02.638000 audit[4849]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 
a1=7ffdeec86c80 a2=0 a3=7ffdeec86c6c items=0 ppid=4731 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.638000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:12:02.648000 audit[4847]: NETFILTER_CFG table=raw:131 family=2 entries=21 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:12:02.648000 audit[4847]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffe3fcf470 a2=0 a3=7fffe3fcf45c items=0 ppid=4731 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.648000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:12:02.656000 audit[4848]: NETFILTER_CFG table=filter:132 family=2 entries=321 op=nft_register_chain pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:12:02.656000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7fff175d1000 a2=0 a3=7fff175d0fec items=0 ppid=4731 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:02.656000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:12:02.750054 systemd-networkd[1484]: calibc2d6fa3025: Gained IPv6LL Jan 16 21:12:02.968571 kubelet[2777]: E0116 21:12:02.968072 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:02.968571 kubelet[2777]: E0116 21:12:02.968299 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:03.119000 audit[4862]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:03.119000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffda73af0 a2=0 a3=7ffffda73adc items=0 ppid=2909 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:03.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:03.137000 audit[4862]: NETFILTER_CFG table=nat:134 family=2 entries=56 op=nft_register_chain pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:03.137000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffffda73af0 a2=0 a3=7ffffda73adc items=0 ppid=2909 pid=4862 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:03.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:04.093946 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Jan 16 21:12:09.532723 containerd[1584]: time="2026-01-16T21:12:09.532047641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:12:09.916148 containerd[1584]: time="2026-01-16T21:12:09.915921983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:09.919192 containerd[1584]: time="2026-01-16T21:12:09.917798505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:12:09.919192 containerd[1584]: time="2026-01-16T21:12:09.917935320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:09.919767 kubelet[2777]: E0116 21:12:09.919685 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:09.919767 kubelet[2777]: E0116 21:12:09.919764 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:09.922542 kubelet[2777]: E0116 21:12:09.920362 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4h9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-rsm8g_calico-apiserver(98509e25-dadb-4a53-8355-8cb0c0d71e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:09.922542 kubelet[2777]: E0116 21:12:09.921560 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:12:09.932710 systemd[1]: Started sshd@9-137.184.190.135:22-68.220.241.50:58870.service - OpenSSH per-connection server daemon (68.220.241.50:58870). Jan 16 21:12:09.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.135:22-68.220.241.50:58870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:09.936509 kernel: kauditd_printk_skb: 366 callbacks suppressed Jan 16 21:12:09.937437 kernel: audit: type=1130 audit(1768597929.933:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.135:22-68.220.241.50:58870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:10.389000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.392122 sshd[4879]: Accepted publickey for core from 68.220.241.50 port 58870 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:10.395903 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:10.396855 kernel: audit: type=1101 audit(1768597930.389:736): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.392000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.403784 kernel: audit: type=1103 audit(1768597930.392:737): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.403909 kernel: audit: type=1006 audit(1768597930.392:738): pid=4879 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 16 21:12:10.392000 audit[4879]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc52e70a0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:10.408194 kernel: audit: type=1300 audit(1768597930.392:738): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc52e70a0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:10.392000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:10.414597 kernel: audit: type=1327 audit(1768597930.392:738): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:10.419942 systemd-logind[1560]: New session 9 of user core. Jan 16 21:12:10.425035 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 16 21:12:10.428000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.436805 kernel: audit: type=1105 audit(1768597930.428:739): pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.435000 audit[4884]: CRED_ACQ pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:10.444802 kernel: audit: type=1103 audit(1768597930.435:740): pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:11.127883 sshd[4884]: Connection closed by 68.220.241.50 port 58870 Jan 16 21:12:11.130214 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:11.133000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:11.142653 systemd[1]: sshd@9-137.184.190.135:22-68.220.241.50:58870.service: Deactivated successfully. Jan 16 21:12:11.143295 kernel: audit: type=1106 audit(1768597931.133:741): pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:11.145284 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 21:12:11.133000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:11.150860 kernel: audit: type=1104 audit(1768597931.133:742): pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:11.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.135:22-68.220.241.50:58870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:11.151246 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Jan 16 21:12:11.154306 systemd-logind[1560]: Removed session 9. 
Jan 16 21:12:13.535491 containerd[1584]: time="2026-01-16T21:12:13.535435242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:12:13.856913 containerd[1584]: time="2026-01-16T21:12:13.856644612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:13.858453 containerd[1584]: time="2026-01-16T21:12:13.858363954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:12:13.858640 containerd[1584]: time="2026-01-16T21:12:13.858375881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:13.859155 kubelet[2777]: E0116 21:12:13.858797 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:12:13.859155 kubelet[2777]: E0116 21:12:13.858856 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:12:13.859155 kubelet[2777]: E0116 21:12:13.858994 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9fe08925ae42432c8b9a6a7a9c1f1d0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:13.863013 containerd[1584]: time="2026-01-16T21:12:13.862970054Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:12:14.194286 containerd[1584]: time="2026-01-16T21:12:14.194206049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:14.195541 containerd[1584]: time="2026-01-16T21:12:14.195466825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:12:14.195679 containerd[1584]: time="2026-01-16T21:12:14.195594996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:14.196010 kubelet[2777]: E0116 21:12:14.195952 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:12:14.196699 kubelet[2777]: E0116 21:12:14.196022 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:12:14.196699 kubelet[2777]: E0116 21:12:14.196203 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:14.198644 kubelet[2777]: E0116 21:12:14.197945 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3" Jan 16 21:12:14.535350 containerd[1584]: time="2026-01-16T21:12:14.534673040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:12:14.880594 containerd[1584]: time="2026-01-16T21:12:14.880266140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:14.881522 containerd[1584]: time="2026-01-16T21:12:14.881422264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:12:14.881661 containerd[1584]: time="2026-01-16T21:12:14.881494444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:14.882068 kubelet[2777]: E0116 21:12:14.881971 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:12:14.882068 kubelet[2777]: E0116 21:12:14.882045 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:12:14.882763 containerd[1584]: time="2026-01-16T21:12:14.882686463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:12:14.884135 kubelet[2777]: E0116 21:12:14.883559 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzr5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67cf49786f-th49d_calico-system(4a350016-b4b1-4c4d-a81e-6fa230a1b42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:14.885199 kubelet[2777]: E0116 21:12:14.885141 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:12:15.228906 containerd[1584]: time="2026-01-16T21:12:15.228775051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:15.230558 containerd[1584]: 
time="2026-01-16T21:12:15.230096138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:12:15.230558 containerd[1584]: time="2026-01-16T21:12:15.230250584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:15.230824 kubelet[2777]: E0116 21:12:15.230715 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:15.230890 kubelet[2777]: E0116 21:12:15.230854 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:15.231621 kubelet[2777]: E0116 21:12:15.231242 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcnrm_calico-system(3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:15.232801 kubelet[2777]: E0116 21:12:15.232725 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:12:15.533006 containerd[1584]: time="2026-01-16T21:12:15.532826344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:12:15.848505 containerd[1584]: time="2026-01-16T21:12:15.848182784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:15.850080 containerd[1584]: time="2026-01-16T21:12:15.849892015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:12:15.850080 containerd[1584]: time="2026-01-16T21:12:15.850033310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:15.850586 kubelet[2777]: E0116 21:12:15.850539 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:15.852462 kubelet[2777]: E0116 21:12:15.850717 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:15.852462 kubelet[2777]: E0116 21:12:15.850951 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwp8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-54lnr_calico-apiserver(3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:15.852807 kubelet[2777]: E0116 21:12:15.852682 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:16.201000 systemd[1]: Started sshd@10-137.184.190.135:22-68.220.241.50:54502.service - OpenSSH per-connection server daemon (68.220.241.50:54502). Jan 16 21:12:16.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.135:22-68.220.241.50:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:16.203500 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:12:16.203662 kernel: audit: type=1130 audit(1768597936.199:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.135:22-68.220.241.50:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:16.537941 containerd[1584]: time="2026-01-16T21:12:16.536646741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:12:16.584000 audit[4907]: USER_ACCT pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.587312 sshd[4907]: Accepted publickey for core from 68.220.241.50 port 54502 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:16.592865 kernel: audit: type=1101 audit(1768597936.584:745): pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.595000 audit[4907]: CRED_ACQ pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.599250 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:16.603682 kernel: audit: type=1103 audit(1768597936.595:746): pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.604080 kernel: audit: type=1006 audit(1768597936.595:747): pid=4907 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 16 21:12:16.595000 audit[4907]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ee931c0 a2=3 a3=0 items=0 ppid=1 pid=4907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:16.608282 kernel: audit: type=1300 audit(1768597936.595:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ee931c0 a2=3 a3=0 items=0 ppid=1 pid=4907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:16.595000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:16.613840 kernel: audit: type=1327 audit(1768597936.595:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:16.619858 systemd-logind[1560]: New session 10 of user core. Jan 16 21:12:16.631137 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 16 21:12:16.635000 audit[4907]: USER_START pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.638000 audit[4911]: CRED_ACQ pid=4911 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.646801 kernel: audit: type=1105 audit(1768597936.635:748): pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.646959 kernel: audit: type=1103 audit(1768597936.638:749): pid=4911 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.856928 containerd[1584]: time="2026-01-16T21:12:16.856515971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:16.869108 containerd[1584]: time="2026-01-16T21:12:16.868898280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:12:16.869108 containerd[1584]: time="2026-01-16T21:12:16.869050480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:16.870185 kubelet[2777]: E0116 21:12:16.869924 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:16.870185 kubelet[2777]: E0116 21:12:16.870007 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:16.871502 kubelet[2777]: E0116 21:12:16.871141 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:16.879122 containerd[1584]: time="2026-01-16T21:12:16.879061504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:12:16.977128 sshd[4911]: Connection closed by 68.220.241.50 port 54502 Jan 16 21:12:16.978042 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:16.978000 audit[4907]: USER_END pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.984779 systemd[1]: sshd@10-137.184.190.135:22-68.220.241.50:54502.service: Deactivated successfully. Jan 16 21:12:16.988334 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 16 21:12:16.989048 kernel: audit: type=1106 audit(1768597936.978:750): pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.978000 audit[4907]: CRED_DISP pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.135:22-68.220.241.50:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:16.995962 kernel: audit: type=1104 audit(1768597936.978:751): pid=4907 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:16.996183 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Jan 16 21:12:16.998809 systemd-logind[1560]: Removed session 10. Jan 16 21:12:17.233929 containerd[1584]: time="2026-01-16T21:12:17.233626519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:17.235102 containerd[1584]: time="2026-01-16T21:12:17.235006112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:12:17.235309 containerd[1584]: time="2026-01-16T21:12:17.235021484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:17.235485 kubelet[2777]: E0116 21:12:17.235419 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:17.235585 kubelet[2777]: E0116 21:12:17.235497 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:17.235799 kubelet[2777]: E0116 21:12:17.235644 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:17.237602 kubelet[2777]: E0116 21:12:17.237523 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:12:22.056164 systemd[1]: Started sshd@11-137.184.190.135:22-68.220.241.50:54516.service - OpenSSH per-connection server daemon (68.220.241.50:54516). Jan 16 21:12:22.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.135:22-68.220.241.50:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:22.059761 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:12:22.059915 kernel: audit: type=1130 audit(1768597942.054:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.135:22-68.220.241.50:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:22.446000 audit[4925]: USER_ACCT pid=4925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.452068 sshd[4925]: Accepted publickey for core from 68.220.241.50 port 54516 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:22.454044 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:22.449000 audit[4925]: CRED_ACQ pid=4925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.457176 kernel: audit: type=1101 audit(1768597942.446:754): pid=4925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.457279 kernel: audit: type=1103 audit(1768597942.449:755): pid=4925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.449000 audit[4925]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e199c40 a2=3 a3=0 items=0 ppid=1 pid=4925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:22.466788 systemd-logind[1560]: New session 11 of user core. Jan 16 21:12:22.469893 kernel: audit: type=1006 audit(1768597942.449:756): pid=4925 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 21:12:22.470030 kernel: audit: type=1300 audit(1768597942.449:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e199c40 a2=3 a3=0 items=0 ppid=1 pid=4925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:22.449000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:22.473163 kernel: audit: type=1327 audit(1768597942.449:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:22.476132 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 21:12:22.480000 audit[4925]: USER_START pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.483000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.492368 kernel: audit: type=1105 audit(1768597942.480:757): pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.492529 kernel: audit: type=1103 audit(1768597942.483:758): pid=4929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.760221 sshd[4929]: Connection closed by 68.220.241.50 port 54516 Jan 16 21:12:22.760048 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:22.761000 audit[4925]: USER_END pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.769005 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jan 16 21:12:22.769921 systemd[1]: sshd@11-137.184.190.135:22-68.220.241.50:54516.service: Deactivated successfully. Jan 16 21:12:22.771770 kernel: audit: type=1106 audit(1768597942.761:759): pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.761000 audit[4925]: CRED_DISP pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.777819 kernel: audit: type=1104 audit(1768597942.761:760): pid=4925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:22.773244 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 21:12:22.777257 systemd-logind[1560]: Removed session 11. Jan 16 21:12:22.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.135:22-68.220.241.50:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:22.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.190.135:22-68.220.241.50:51438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:22.851289 systemd[1]: Started sshd@12-137.184.190.135:22-68.220.241.50:51438.service - OpenSSH per-connection server daemon (68.220.241.50:51438). Jan 16 21:12:23.250000 audit[4949]: USER_ACCT pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.251633 sshd[4949]: Accepted publickey for core from 68.220.241.50 port 51438 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:23.251000 audit[4949]: CRED_ACQ pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.251000 audit[4949]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc20cb5cd0 a2=3 a3=0 items=0 ppid=1 pid=4949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:23.251000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:23.254266 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:23.262248 systemd-logind[1560]: New session 12 of user core. Jan 16 21:12:23.266044 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 21:12:23.270000 audit[4949]: USER_START pid=4949 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.273000 audit[4953]: CRED_ACQ pid=4953 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.594433 sshd[4953]: Connection closed by 68.220.241.50 port 51438 Jan 16 21:12:23.598324 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:23.599000 audit[4949]: USER_END pid=4949 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.599000 audit[4949]: CRED_DISP pid=4949 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:23.604423 systemd[1]: sshd@12-137.184.190.135:22-68.220.241.50:51438.service: Deactivated successfully. 
Jan 16 21:12:23.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.190.135:22-68.220.241.50:51438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:23.608509 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 21:12:23.611309 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jan 16 21:12:23.613204 systemd-logind[1560]: Removed session 12. Jan 16 21:12:23.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.190.135:22-68.220.241.50:51448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:23.670111 systemd[1]: Started sshd@13-137.184.190.135:22-68.220.241.50:51448.service - OpenSSH per-connection server daemon (68.220.241.50:51448). Jan 16 21:12:24.060000 audit[4963]: USER_ACCT pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.062038 sshd[4963]: Accepted publickey for core from 68.220.241.50 port 51448 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:24.064000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.064000 audit[4963]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc07e21e0 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:24.064000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:24.067118 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:24.076150 systemd-logind[1560]: New session 13 of user core. Jan 16 21:12:24.081048 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 16 21:12:24.086000 audit[4963]: USER_START pid=4963 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.090000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.394659 sshd[4967]: Connection closed by 68.220.241.50 port 51448 Jan 16 21:12:24.394946 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:24.396000 audit[4963]: USER_END pid=4963 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.396000 audit[4963]: CRED_DISP pid=4963 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:24.400419 systemd[1]: sshd@13-137.184.190.135:22-68.220.241.50:51448.service: Deactivated successfully. Jan 16 21:12:24.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.190.135:22-68.220.241.50:51448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:24.404164 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 21:12:24.407431 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jan 16 21:12:24.409916 systemd-logind[1560]: Removed session 13. 
Jan 16 21:12:24.535160 kubelet[2777]: E0116 21:12:24.535078 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:12:24.538571 kubelet[2777]: E0116 21:12:24.538381 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:24.987443 kubelet[2777]: E0116 21:12:24.987386 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:26.533763 kubelet[2777]: E0116 21:12:26.533427 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:27.533923 kubelet[2777]: E0116 21:12:27.533693 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3" Jan 16 21:12:28.535833 kubelet[2777]: E0116 21:12:28.535600 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" 
podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:12:29.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.135:22-68.220.241.50:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:29.474471 systemd[1]: Started sshd@14-137.184.190.135:22-68.220.241.50:51450.service - OpenSSH per-connection server daemon (68.220.241.50:51450). Jan 16 21:12:29.475856 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 21:12:29.475930 kernel: audit: type=1130 audit(1768597949.473:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.135:22-68.220.241.50:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:29.533893 kubelet[2777]: E0116 21:12:29.533807 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:12:29.534284 kubelet[2777]: E0116 21:12:29.533953 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:12:29.858000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.860130 sshd[5010]: Accepted publickey for core from 68.220.241.50 port 51450 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:29.866882 kernel: audit: type=1101 audit(1768597949.858:781): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.867000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.870903 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:29.875833 kernel: audit: type=1103 audit(1768597949.867:782): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 
terminal=ssh res=success' Jan 16 21:12:29.876848 kernel: audit: type=1006 audit(1768597949.867:783): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 16 21:12:29.867000 audit[5010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcce6d6770 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:29.889823 kernel: audit: type=1300 audit(1768597949.867:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcce6d6770 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:29.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:29.893910 kernel: audit: type=1327 audit(1768597949.867:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:29.899283 systemd-logind[1560]: New session 14 of user core. Jan 16 21:12:29.907042 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 21:12:29.912000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.923784 kernel: audit: type=1105 audit(1768597949.912:784): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.924807 kernel: audit: type=1103 audit(1768597949.923:785): pid=5017 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:29.923000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:30.161910 sshd[5017]: Connection closed by 68.220.241.50 port 51450 Jan 16 21:12:30.160281 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:30.165000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:30.176864 kernel: audit: type=1106 audit(1768597950.165:786): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:30.165000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:30.178269 systemd[1]: sshd@14-137.184.190.135:22-68.220.241.50:51450.service: Deactivated successfully. Jan 16 21:12:30.182348 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 21:12:30.184811 kernel: audit: type=1104 audit(1768597950.165:787): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:30.185569 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jan 16 21:12:30.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.135:22-68.220.241.50:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:30.188642 systemd-logind[1560]: Removed session 14. Jan 16 21:12:35.250299 systemd[1]: Started sshd@15-137.184.190.135:22-68.220.241.50:58330.service - OpenSSH per-connection server daemon (68.220.241.50:58330). Jan 16 21:12:35.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.135:22-68.220.241.50:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:35.252433 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:12:35.252524 kernel: audit: type=1130 audit(1768597955.249:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.135:22-68.220.241.50:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:35.533044 containerd[1584]: time="2026-01-16T21:12:35.532914317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:12:35.716000 audit[5029]: USER_ACCT pid=5029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.723931 sshd[5029]: Accepted publickey for core from 68.220.241.50 port 58330 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:35.724352 kernel: audit: type=1101 audit(1768597955.716:790): pid=5029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.724000 audit[5029]: CRED_ACQ pid=5029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.732615 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:35.734085 kernel: audit: type=1103 audit(1768597955.724:791): pid=5029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.742821 kernel: audit: type=1006 audit(1768597955.724:792): pid=5029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 16 21:12:35.724000 audit[5029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe47ec6c20 a2=3 a3=0 items=0 ppid=1 pid=5029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:35.749772 kernel: audit: type=1300 audit(1768597955.724:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe47ec6c20 a2=3 a3=0 items=0 ppid=1 pid=5029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:35.750672 systemd-logind[1560]: New session 15 of user core. Jan 16 21:12:35.751001 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 16 21:12:35.724000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:35.758782 kernel: audit: type=1327 audit(1768597955.724:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:35.756000 audit[5029]: USER_START pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.768795 kernel: audit: type=1105 audit(1768597955.756:793): pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.759000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.778770 kernel: audit: type=1103 audit(1768597955.759:794): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:35.850990 containerd[1584]: time="2026-01-16T21:12:35.850825928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:35.855147 containerd[1584]: time="2026-01-16T21:12:35.854997402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:12:35.855838 containerd[1584]: time="2026-01-16T21:12:35.855065722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:35.855971 kubelet[2777]: E0116 21:12:35.855591 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:35.855971 kubelet[2777]: E0116 21:12:35.855650 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:35.858991 kubelet[2777]: E0116 21:12:35.857525 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4h9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-rsm8g_calico-apiserver(98509e25-dadb-4a53-8355-8cb0c0d71e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:35.861631 kubelet[2777]: E0116 21:12:35.861571 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:12:36.109619 sshd[5033]: Connection closed by 68.220.241.50 port 58330 Jan 16 21:12:36.110533 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:36.111000 audit[5029]: USER_END pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:36.116105 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Jan 16 21:12:36.118598 systemd[1]: sshd@15-137.184.190.135:22-68.220.241.50:58330.service: Deactivated successfully. 
Jan 16 21:12:36.119785 kernel: audit: type=1106 audit(1768597956.111:795): pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:36.111000 audit[5029]: CRED_DISP pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:36.122670 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 21:12:36.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.135:22-68.220.241.50:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:36.126339 kernel: audit: type=1104 audit(1768597956.111:796): pid=5029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:36.127621 systemd-logind[1560]: Removed session 15. Jan 16 21:12:38.538084 containerd[1584]: time="2026-01-16T21:12:38.538025018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:12:38.882833 containerd[1584]: time="2026-01-16T21:12:38.882424833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:38.884218 containerd[1584]: time="2026-01-16T21:12:38.884148838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:12:38.884595 containerd[1584]: time="2026-01-16T21:12:38.884193814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:38.885480 kubelet[2777]: E0116 21:12:38.884882 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:12:38.885480 kubelet[2777]: E0116 21:12:38.884960 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:12:38.885480 kubelet[2777]: E0116 21:12:38.885225 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9fe08925ae42432c8b9a6a7a9c1f1d0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:38.886874 containerd[1584]: time="2026-01-16T21:12:38.886493156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:12:39.233374 containerd[1584]: time="2026-01-16T21:12:39.233095822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:39.234900 containerd[1584]: time="2026-01-16T21:12:39.234701553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:12:39.235619 containerd[1584]: time="2026-01-16T21:12:39.234756113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:39.235680 kubelet[2777]: E0116 21:12:39.235284 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:39.235680 kubelet[2777]: E0116 21:12:39.235351 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:12:39.236163 kubelet[2777]: E0116 21:12:39.235878 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwp8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-88cb9dd67-54lnr_calico-apiserver(3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:39.236330 containerd[1584]: time="2026-01-16T21:12:39.236222589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:12:39.237419 kubelet[2777]: E0116 21:12:39.237315 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:39.559691 containerd[1584]: time="2026-01-16T21:12:39.559238914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:39.561065 containerd[1584]: time="2026-01-16T21:12:39.560862542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:39.561065 containerd[1584]: time="2026-01-16T21:12:39.560929067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:12:39.561262 kubelet[2777]: E0116 21:12:39.561215 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:12:39.561342 kubelet[2777]: E0116 21:12:39.561270 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:12:39.561488 kubelet[2777]: E0116 21:12:39.561415 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d65f9d5-lfhf5_calico-system(23273946-e0fa-48ec-9cd8-e5c3fd7332d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:39.562872 kubelet[2777]: E0116 21:12:39.562807 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3" Jan 16 21:12:40.541296 containerd[1584]: time="2026-01-16T21:12:40.541239499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:12:41.171994 containerd[1584]: time="2026-01-16T21:12:41.171898524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:41.173159 containerd[1584]: time="2026-01-16T21:12:41.173049334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:12:41.173159 containerd[1584]: time="2026-01-16T21:12:41.173110676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:41.173588 kubelet[2777]: E0116 21:12:41.173499 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:12:41.175788 kubelet[2777]: E0116 21:12:41.173623 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:12:41.175788 kubelet[2777]: E0116 21:12:41.173890 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzr5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67cf49786f-th49d_calico-system(4a350016-b4b1-4c4d-a81e-6fa230a1b42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:41.175788 kubelet[2777]: E0116 21:12:41.175381 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f" Jan 16 21:12:41.189689 systemd[1]: Started sshd@16-137.184.190.135:22-68.220.241.50:58334.service - OpenSSH per-connection server daemon (68.220.241.50:58334). Jan 16 21:12:41.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.135:22-68.220.241.50:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:41.191380 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:12:41.191458 kernel: audit: type=1130 audit(1768597961.189:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.135:22-68.220.241.50:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:41.608000 audit[5045]: USER_ACCT pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.612064 sshd[5045]: Accepted publickey for core from 68.220.241.50 port 58334 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:41.613000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.616065 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:41.617683 kernel: audit: type=1101 audit(1768597961.608:799): pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.617783 kernel: audit: type=1103 audit(1768597961.613:800): pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.623165 kernel: audit: type=1006 audit(1768597961.613:801): pid=5045 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 21:12:41.613000 audit[5045]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdf566f0 a2=3 a3=0 items=0 ppid=1 pid=5045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:41.627815 kernel: audit: type=1300 audit(1768597961.613:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdf566f0 a2=3 a3=0 items=0 ppid=1 pid=5045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:41.629769 systemd-logind[1560]: New session 16 of user core. Jan 16 21:12:41.633447 kernel: audit: type=1327 audit(1768597961.613:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:41.613000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:41.640049 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 21:12:41.646000 audit[5045]: USER_START pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.649000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.656083 kernel: audit: type=1105 audit(1768597961.646:802): pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.656143 kernel: audit: type=1103 audit(1768597961.649:803): pid=5049 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.916942 sshd[5049]: Connection closed by 68.220.241.50 port 58334 Jan 16 21:12:41.917917 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:41.923000 audit[5045]: USER_END pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.928278 systemd[1]: sshd@16-137.184.190.135:22-68.220.241.50:58334.service: Deactivated successfully. Jan 16 21:12:41.932793 kernel: audit: type=1106 audit(1768597961.923:804): pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.932925 kernel: audit: type=1104 audit(1768597961.923:805): pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.923000 audit[5045]: CRED_DISP pid=5045 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:41.933167 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 21:12:41.937727 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Jan 16 21:12:41.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.135:22-68.220.241.50:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:41.941901 systemd-logind[1560]: Removed session 16. 
Jan 16 21:12:41.997965 systemd[1]: Started sshd@17-137.184.190.135:22-68.220.241.50:58344.service - OpenSSH per-connection server daemon (68.220.241.50:58344). Jan 16 21:12:41.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.190.135:22-68.220.241.50:58344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:42.406000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.408814 sshd[5060]: Accepted publickey for core from 68.220.241.50 port 58344 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:42.408000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.408000 audit[5060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7e08bac0 a2=3 a3=0 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:42.408000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:42.410801 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:42.419799 systemd-logind[1560]: New session 17 of user core. Jan 16 21:12:42.424101 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 21:12:42.427000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.430000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.883657 sshd[5064]: Connection closed by 68.220.241.50 port 58344 Jan 16 21:12:42.887044 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:42.891000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.893000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:42.897369 systemd[1]: sshd@17-137.184.190.135:22-68.220.241.50:58344.service: Deactivated successfully. 
Jan 16 21:12:42.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.190.135:22-68.220.241.50:58344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:42.902295 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 21:12:42.906197 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Jan 16 21:12:42.908934 systemd-logind[1560]: Removed session 17. Jan 16 21:12:42.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.190.135:22-68.220.241.50:36492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:42.959895 systemd[1]: Started sshd@18-137.184.190.135:22-68.220.241.50:36492.service - OpenSSH per-connection server daemon (68.220.241.50:36492). Jan 16 21:12:43.379000 audit[5082]: USER_ACCT pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:43.380756 sshd[5082]: Accepted publickey for core from 68.220.241.50 port 36492 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:43.381000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:43.381000 audit[5082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe194c8fe0 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:43.381000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:43.383497 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:43.391452 systemd-logind[1560]: New session 18 of user core. Jan 16 21:12:43.400086 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 16 21:12:43.404000 audit[5082]: USER_START pid=5082 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:43.407000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:43.535022 containerd[1584]: time="2026-01-16T21:12:43.534809871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:12:43.540329 kubelet[2777]: E0116 21:12:43.540288 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:43.869463 containerd[1584]: time="2026-01-16T21:12:43.869233650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:43.871213 containerd[1584]: time="2026-01-16T21:12:43.871144027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:12:43.871379 containerd[1584]: time="2026-01-16T21:12:43.871195502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:43.871859 kubelet[2777]: E0116 21:12:43.871798 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:43.872578 kubelet[2777]: E0116 21:12:43.871875 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:12:43.872578 kubelet[2777]: E0116 21:12:43.872227 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:43.872984 containerd[1584]: time="2026-01-16T21:12:43.872296271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:12:44.244880 containerd[1584]: time="2026-01-16T21:12:44.244797060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:44.246689 containerd[1584]: time="2026-01-16T21:12:44.246284390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:12:44.246689 containerd[1584]: time="2026-01-16T21:12:44.246349961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:44.246918 kubelet[2777]: E0116 21:12:44.246661 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:44.246918 kubelet[2777]: E0116 21:12:44.246766 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:12:44.249055 kubelet[2777]: E0116 21:12:44.247532 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcnrm_calico-system(3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:44.249357 containerd[1584]: time="2026-01-16T21:12:44.248104187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:12:44.250964 kubelet[2777]: E0116 21:12:44.250879 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529" Jan 16 21:12:44.354000 audit[5098]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:44.354000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce6d80f10 a2=0 a3=7ffce6d80efc items=0 ppid=2909 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:44.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:44.366000 audit[5098]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:44.366000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce6d80f10 a2=0 a3=7ffce6d80efc items=0 ppid=2909 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:44.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:44.408000 audit[5100]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:44.408000 audit[5100]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe3e82ae30 a2=0 a3=7ffe3e82ae1c items=0 ppid=2909 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:44.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:44.423000 audit[5100]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:44.428787 sshd[5086]: Connection closed by 68.220.241.50 port 36492 Jan 16 21:12:44.429806 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:44.432000 audit[5082]: USER_END pid=5082 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:44.432000 audit[5082]: CRED_DISP pid=5082 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:44.423000 audit[5100]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3e82ae30 a2=0 a3=0 items=0 ppid=2909 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:44.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:44.441945 systemd[1]: sshd@18-137.184.190.135:22-68.220.241.50:36492.service: Deactivated successfully. Jan 16 21:12:44.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.190.135:22-68.220.241.50:36492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:44.450976 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 21:12:44.455464 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Jan 16 21:12:44.459248 systemd-logind[1560]: Removed session 18. Jan 16 21:12:44.504784 systemd[1]: Started sshd@19-137.184.190.135:22-68.220.241.50:36508.service - OpenSSH per-connection server daemon (68.220.241.50:36508). Jan 16 21:12:44.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.190.135:22-68.220.241.50:36508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:44.544569 kubelet[2777]: E0116 21:12:44.544384 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:44.547583 kubelet[2777]: E0116 21:12:44.547545 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 21:12:44.578027 containerd[1584]: time="2026-01-16T21:12:44.577972627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:12:44.582255 containerd[1584]: time="2026-01-16T21:12:44.581688790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:12:44.583952 containerd[1584]: time="2026-01-16T21:12:44.582890415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:12:44.585000 kubelet[2777]: E0116 21:12:44.584930 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:44.585987 kubelet[2777]: E0116 21:12:44.585491 2777 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:12:44.587233 kubelet[2777]: E0116 21:12:44.587040 2777 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fqdjm_calico-system(948ce78a-6a96-42a6-a8f0-360a2ec834df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:12:44.589150 kubelet[2777]: E0116 21:12:44.588947 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df" Jan 16 21:12:44.917000 audit[5105]: USER_ACCT pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:44.919548 sshd[5105]: Accepted publickey for core from 68.220.241.50 port 36508 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:44.919000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:44.919000 audit[5105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaee31010 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:44.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:44.922542 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:44.931220 systemd-logind[1560]: New session 19 of user core. Jan 16 21:12:44.939217 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 21:12:44.948000 audit[5105]: USER_START pid=5105 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:44.952000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:45.522294 sshd[5109]: Connection closed by 68.220.241.50 port 36508 Jan 16 21:12:45.521352 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:45.524000 audit[5105]: USER_END pid=5105 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:45.524000 audit[5105]: CRED_DISP pid=5105 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:45.529983 systemd[1]: sshd@19-137.184.190.135:22-68.220.241.50:36508.service: Deactivated successfully. Jan 16 21:12:45.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.190.135:22-68.220.241.50:36508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:45.535229 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 21:12:45.539392 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Jan 16 21:12:45.541757 systemd-logind[1560]: Removed session 19. Jan 16 21:12:45.609602 systemd[1]: Started sshd@20-137.184.190.135:22-68.220.241.50:36518.service - OpenSSH per-connection server daemon (68.220.241.50:36518). Jan 16 21:12:45.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.190.135:22-68.220.241.50:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:46.018000 audit[5119]: USER_ACCT pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.019895 sshd[5119]: Accepted publickey for core from 68.220.241.50 port 36518 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:46.020000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.020000 audit[5119]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4173b840 a2=3 a3=0 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:46.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:46.023655 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:46.033173 systemd-logind[1560]: New session 20 of user core. Jan 16 21:12:46.040160 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 21:12:46.044000 audit[5119]: USER_START pid=5119 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.049000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.304956 sshd[5125]: Connection closed by 68.220.241.50 port 36518 Jan 16 21:12:46.304709 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 16 21:12:46.316104 kernel: kauditd_printk_skb: 54 callbacks suppressed Jan 16 21:12:46.316268 kernel: audit: type=1106 audit(1768597966.306:844): pid=5119 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.306000 audit[5119]: USER_END pid=5119 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.318769 kernel: audit: type=1104 audit(1768597966.306:845): pid=5119 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.306000 audit[5119]: CRED_DISP pid=5119 uid=0 auid=500 
ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:46.317062 systemd[1]: sshd@20-137.184.190.135:22-68.220.241.50:36518.service: Deactivated successfully. Jan 16 21:12:46.320503 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 21:12:46.323601 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Jan 16 21:12:46.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.190.135:22-68.220.241.50:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:46.331834 kernel: audit: type=1131 audit(1768597966.316:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.190.135:22-68.220.241.50:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:46.332123 systemd-logind[1560]: Removed session 20. Jan 16 21:12:49.532785 kubelet[2777]: E0116 21:12:49.532509 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14" Jan 16 21:12:50.451000 audit[5137]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:50.451000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2d3e9f20 a2=0 a3=7fff2d3e9f0c items=0 ppid=2909 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:50.458367 kernel: audit: type=1325 audit(1768597970.451:847): table=filter:139 family=2 entries=26 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:50.458485 kernel: audit: type=1300 audit(1768597970.451:847): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2d3e9f20 a2=0 a3=7fff2d3e9f0c items=0 ppid=2909 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:50.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:50.466768 kernel: audit: type=1327 audit(1768597970.451:847): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:50.467000 audit[5137]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:12:50.472811 kernel: audit: type=1325 audit(1768597970.467:848): table=nat:140 family=2 entries=104 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 16 21:12:50.473026 kernel: audit: type=1300 audit(1768597970.467:848): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff2d3e9f20 a2=0 a3=7fff2d3e9f0c items=0 ppid=2909 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:50.467000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff2d3e9f20 a2=0 a3=7fff2d3e9f0c items=0 ppid=2909 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:50.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:50.483842 kernel: audit: type=1327 audit(1768597970.467:848): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:12:51.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.135:22-68.220.241.50:36526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:12:51.374860 systemd[1]: Started sshd@21-137.184.190.135:22-68.220.241.50:36526.service - OpenSSH per-connection server daemon (68.220.241.50:36526). Jan 16 21:12:51.381918 kernel: audit: type=1130 audit(1768597971.374:849): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.135:22-68.220.241.50:36526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:12:51.533230 kubelet[2777]: E0116 21:12:51.533179 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f" Jan 16 21:12:51.733000 audit[5139]: USER_ACCT pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:51.736916 sshd[5139]: Accepted publickey for core from 68.220.241.50 port 36526 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI Jan 16 21:12:51.739843 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:12:51.740778 kernel: audit: type=1101 audit(1768597971.733:850): pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:51.735000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:51.747058 systemd-logind[1560]: New session 21 of user core. Jan 16 21:12:51.748791 kernel: audit: type=1103 audit(1768597971.735:851): pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 21:12:51.748882 kernel: audit: type=1006 audit(1768597971.735:852): pid=5139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 21:12:51.735000 audit[5139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef84ada0 a2=3 a3=0 items=0 ppid=1 pid=5139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:51.753395 kernel: audit: type=1300 audit(1768597971.735:852): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef84ada0 a2=3 a3=0 items=0 ppid=1 pid=5139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:12:51.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:51.757951 kernel: audit: type=1327 audit(1768597971.735:852): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:12:51.759099 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 21:12:51.764000 audit[5139]: USER_START pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:51.770000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:51.773793 kernel: audit: type=1105 audit(1768597971.764:853): pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:51.773868 kernel: audit: type=1103 audit(1768597971.770:854): pid=5143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:52.019468 sshd[5143]: Connection closed by 68.220.241.50 port 36526
Jan 16 21:12:52.019105 sshd-session[5139]: pam_unix(sshd:session): session closed for user core
Jan 16 21:12:52.021000 audit[5139]: USER_END pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:52.027880 systemd[1]: sshd@21-137.184.190.135:22-68.220.241.50:36526.service: Deactivated successfully.
Jan 16 21:12:52.021000 audit[5139]: CRED_DISP pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:52.032343 kernel: audit: type=1106 audit(1768597972.021:855): pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:52.032440 kernel: audit: type=1104 audit(1768597972.021:856): pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:52.032519 systemd[1]: session-21.scope: Deactivated successfully.
Jan 16 21:12:52.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.135:22-68.220.241.50:36526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:12:52.037709 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit.
Jan 16 21:12:52.040060 systemd-logind[1560]: Removed session 21.
Jan 16 21:12:52.533021 kubelet[2777]: E0116 21:12:52.532643 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f"
Jan 16 21:12:53.535286 kubelet[2777]: E0116 21:12:53.535219 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d65f9d5-lfhf5" podUID="23273946-e0fa-48ec-9cd8-e5c3fd7332d3"
Jan 16 21:12:56.534573 kubelet[2777]: E0116 21:12:56.534374 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcnrm" podUID="3ff9aa7f-1c05-4598-8de4-5a5a7bc4f529"
Jan 16 21:12:56.535513 kubelet[2777]: E0116 21:12:56.535257 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fqdjm" podUID="948ce78a-6a96-42a6-a8f0-360a2ec834df"
Jan 16 21:12:57.102815 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:12:57.102965 kernel: audit: type=1130 audit(1768597977.097:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.135:22-68.220.241.50:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:12:57.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.135:22-68.220.241.50:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:12:57.099256 systemd[1]: Started sshd@22-137.184.190.135:22-68.220.241.50:34670.service - OpenSSH per-connection server daemon (68.220.241.50:34670).
Jan 16 21:12:57.465000 audit[5180]: USER_ACCT pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.467837 sshd[5180]: Accepted publickey for core from 68.220.241.50 port 34670 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI
Jan 16 21:12:57.473541 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:12:57.474773 kernel: audit: type=1101 audit(1768597977.465:859): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.470000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.481550 systemd-logind[1560]: New session 22 of user core.
Jan 16 21:12:57.483038 kernel: audit: type=1103 audit(1768597977.470:860): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.483132 kernel: audit: type=1006 audit(1768597977.470:861): pid=5180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jan 16 21:12:57.470000 audit[5180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8ab1fd20 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:12:57.489890 kernel: audit: type=1300 audit(1768597977.470:861): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8ab1fd20 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:12:57.489081 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 16 21:12:57.470000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:12:57.494000 audit[5180]: USER_START pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.500220 kernel: audit: type=1327 audit(1768597977.470:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:12:57.500313 kernel: audit: type=1105 audit(1768597977.494:862): pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.499000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.506550 kernel: audit: type=1103 audit(1768597977.499:863): pid=5184 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.755693 sshd[5184]: Connection closed by 68.220.241.50 port 34670
Jan 16 21:12:57.756577 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Jan 16 21:12:57.758000 audit[5180]: USER_END pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.766214 systemd[1]: sshd@22-137.184.190.135:22-68.220.241.50:34670.service: Deactivated successfully.
Jan 16 21:12:57.758000 audit[5180]: CRED_DISP pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.770535 systemd[1]: session-22.scope: Deactivated successfully.
Jan 16 21:12:57.771575 kernel: audit: type=1106 audit(1768597977.758:864): pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.771672 kernel: audit: type=1104 audit(1768597977.758:865): pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:12:57.775002 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Jan 16 21:12:57.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.135:22-68.220.241.50:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:12:57.777124 systemd-logind[1560]: Removed session 22.
Jan 16 21:13:01.532628 kubelet[2777]: E0116 21:13:01.532477 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-rsm8g" podUID="98509e25-dadb-4a53-8355-8cb0c0d71e14"
Jan 16 21:13:02.849158 systemd[1]: Started sshd@23-137.184.190.135:22-68.220.241.50:49342.service - OpenSSH per-connection server daemon (68.220.241.50:49342).
Jan 16 21:13:02.859113 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:13:02.859210 kernel: audit: type=1130 audit(1768597982.847:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.135:22-68.220.241.50:49342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:13:02.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.135:22-68.220.241.50:49342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:13:03.264000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.267068 sshd[5197]: Accepted publickey for core from 68.220.241.50 port 49342 ssh2: RSA SHA256:0TIfaCMFjZ+DZLKyAY8AqXCIwfcgirSh3KulVUQk9aI
Jan 16 21:13:03.272848 kernel: audit: type=1101 audit(1768597983.264:868): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.275203 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:13:03.272000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.282620 kernel: audit: type=1103 audit(1768597983.272:869): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.282910 kernel: audit: type=1006 audit(1768597983.272:870): pid=5197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jan 16 21:13:03.284825 kernel: audit: type=1300 audit(1768597983.272:870): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6605ff40 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:13:03.272000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6605ff40 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:13:03.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:13:03.294104 kernel: audit: type=1327 audit(1768597983.272:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:13:03.299006 systemd-logind[1560]: New session 23 of user core.
Jan 16 21:13:03.302993 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 16 21:13:03.308000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.317767 kernel: audit: type=1105 audit(1768597983.308:871): pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.316000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.324773 kernel: audit: type=1103 audit(1768597983.316:872): pid=5201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.629017 sshd[5201]: Connection closed by 68.220.241.50 port 49342
Jan 16 21:13:03.632195 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Jan 16 21:13:03.636000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.646797 kernel: audit: type=1106 audit(1768597983.636:873): pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.647065 systemd[1]: sshd@23-137.184.190.135:22-68.220.241.50:49342.service: Deactivated successfully.
Jan 16 21:13:03.651467 systemd[1]: session-23.scope: Deactivated successfully.
Jan 16 21:13:03.656219 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit.
Jan 16 21:13:03.636000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.660999 systemd-logind[1560]: Removed session 23.
Jan 16 21:13:03.665781 kernel: audit: type=1104 audit(1768597983.636:874): pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 16 21:13:03.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.135:22-68.220.241.50:49342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:13:04.533782 kubelet[2777]: E0116 21:13:04.533583 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-88cb9dd67-54lnr" podUID="3c7766f8-3124-4eac-b0a1-e1f23a7c1e1f"
Jan 16 21:13:05.530887 kubelet[2777]: E0116 21:13:05.530828 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 21:13:05.533440 kubelet[2777]: E0116 21:13:05.533325 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67cf49786f-th49d" podUID="4a350016-b4b1-4c4d-a81e-6fa230a1b42f"