Jan 23 18:26:20.178097 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026 Jan 23 18:26:20.178123 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:26:20.178137 kernel: BIOS-provided physical RAM map: Jan 23 18:26:20.178144 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 23 18:26:20.178150 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 23 18:26:20.178156 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 23 18:26:20.178163 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 23 18:26:20.178170 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 23 18:26:20.178214 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 23 18:26:20.178221 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 23 18:26:20.178233 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:26:20.178239 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 23 18:26:20.178246 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 18:26:20.178253 kernel: NX (Execute Disable) protection: active Jan 23 18:26:20.178260 kernel: APIC: Static calls initialized Jan 23 18:26:20.178270 kernel: SMBIOS 2.8 present. 
Jan 23 18:26:20.178313 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 23 18:26:20.178320 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:26:20.178327 kernel: Hypervisor detected: KVM Jan 23 18:26:20.178334 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 18:26:20.178341 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:26:20.178347 kernel: kvm-clock: using sched offset of 11398931431 cycles Jan 23 18:26:20.178355 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:26:20.178363 kernel: tsc: Detected 2445.424 MHz processor Jan 23 18:26:20.178374 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:26:20.178447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:26:20.178456 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 18:26:20.178464 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 23 18:26:20.178471 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:26:20.178478 kernel: Using GB pages for direct mapping Jan 23 18:26:20.178485 kernel: ACPI: Early table checksum verification disabled Jan 23 18:26:20.178497 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 23 18:26:20.178504 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178511 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178519 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178526 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 23 18:26:20.178533 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178540 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178550 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178558 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:26:20.178569 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 23 18:26:20.178576 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 23 18:26:20.178584 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 23 18:26:20.178592 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 23 18:26:20.178602 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 23 18:26:20.178610 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 23 18:26:20.178617 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 23 18:26:20.178624 kernel: No NUMA configuration found Jan 23 18:26:20.178632 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 23 18:26:20.178639 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 23 18:26:20.178649 kernel: Zone ranges: Jan 23 18:26:20.178657 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:26:20.178664 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 23 18:26:20.178671 kernel: Normal empty Jan 23 18:26:20.178679 kernel: Device empty Jan 23 18:26:20.178686 kernel: Movable zone start for each node Jan 23 18:26:20.178693 kernel: Early memory node ranges Jan 23 18:26:20.178701 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 23 18:26:20.178711 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 23 18:26:20.178718 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 23 18:26:20.178726 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:26:20.178734 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 23 18:26:20.178774 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 23 18:26:20.178782 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:26:20.178790 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:26:20.178801 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:26:20.178808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:26:20.178846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:26:20.178854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:26:20.178861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:26:20.178869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:26:20.178876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:26:20.178887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 18:26:20.178895 kernel: TSC deadline timer available Jan 23 18:26:20.178902 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:26:20.178910 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:26:20.178918 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:26:20.178975 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:26:20.178989 kernel: CPU topo: Num. cores per package: 4 Jan 23 18:26:20.179003 kernel: CPU topo: Num. threads per package: 4 Jan 23 18:26:20.179018 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 18:26:20.179025 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:26:20.179033 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:26:20.179046 kernel: kvm-guest: setup PV sched yield Jan 23 18:26:20.179058 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 23 18:26:20.179070 kernel: Booting paravirtualized kernel on KVM Jan 23 18:26:20.179082 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:26:20.179100 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 18:26:20.179112 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 18:26:20.179125 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 18:26:20.179137 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 18:26:20.179150 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:26:20.179158 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:26:20.179167 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:26:20.179179 kernel: random: crng init done Jan 23 18:26:20.179186 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:26:20.179194 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 
18:26:20.179201 kernel: Fallback order for Node 0: 0 Jan 23 18:26:20.179209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 23 18:26:20.179217 kernel: Policy zone: DMA32 Jan 23 18:26:20.179225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:26:20.179236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 18:26:20.179246 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:26:20.179259 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:26:20.179271 kernel: Dynamic Preempt: voluntary Jan 23 18:26:20.179283 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:26:20.179303 kernel: rcu: RCU event tracing is enabled. Jan 23 18:26:20.179317 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 18:26:20.179335 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:26:20.179461 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:26:20.179479 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:26:20.179492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:26:20.179506 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 18:26:20.179519 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:26:20.179533 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:26:20.179552 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:26:20.179564 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 23 18:26:20.179577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 18:26:20.179602 kernel: Console: colour VGA+ 80x25 Jan 23 18:26:20.179620 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:26:20.179633 kernel: ACPI: Core revision 20240827 Jan 23 18:26:20.179646 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 18:26:20.179659 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:26:20.179671 kernel: x2apic enabled Jan 23 18:26:20.179686 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:26:20.179770 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 18:26:20.179785 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 18:26:20.179799 kernel: kvm-guest: setup PV IPIs Jan 23 18:26:20.179818 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 18:26:20.179831 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 18:26:20.179844 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 23 18:26:20.179852 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:26:20.179860 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 18:26:20.179868 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 18:26:20.179876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:26:20.179887 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 18:26:20.179895 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:26:20.179903 kernel: Speculative Store Bypass: Vulnerable Jan 23 18:26:20.179911 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 18:26:20.179920 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 18:26:20.179991 kernel: active return thunk: srso_alias_return_thunk Jan 23 18:26:20.180000 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 18:26:20.180012 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 18:26:20.180020 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 18:26:20.180028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:26:20.180036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:26:20.180044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:26:20.180056 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:26:20.180111 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 18:26:20.180128 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:26:20.180141 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:26:20.180153 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:26:20.180166 kernel: landlock: Up and running. Jan 23 18:26:20.180178 kernel: SELinux: Initializing. Jan 23 18:26:20.180191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:26:20.180204 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:26:20.180306 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 18:26:20.180321 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 18:26:20.180333 kernel: signal: max sigframe size: 1776 Jan 23 18:26:20.180346 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:26:20.180359 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:26:20.180372 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:26:20.180475 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:26:20.180494 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:26:20.180507 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:26:20.180519 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 23 18:26:20.180532 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 18:26:20.180544 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 23 18:26:20.180558 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120520K reserved, 0K cma-reserved) Jan 23 18:26:20.180570 kernel: devtmpfs: initialized Jan 23 18:26:20.180586 kernel: x86/mm: Memory block size: 128MB Jan 23 18:26:20.180599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:26:20.180612 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 18:26:20.180624 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:26:20.180637 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:26:20.180649 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:26:20.180662 kernel: audit: type=2000 audit(1769192769.935:1): state=initialized audit_enabled=0 res=1 Jan 23 18:26:20.180678 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:26:20.180690 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:26:20.180703 kernel: cpuidle: using governor menu Jan 23 18:26:20.180715 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:26:20.180728 kernel: dca service started, version 1.12.1 Jan 23 18:26:20.180740 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 23 18:26:20.180753 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 23 18:26:20.180769 kernel: PCI: Using configuration type 1 for base access Jan 23 18:26:20.180782 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:26:20.180794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:26:20.180807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:26:20.180819 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:26:20.180832 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:26:20.180844 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:26:20.181056 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:26:20.181070 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:26:20.181083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:26:20.181095 kernel: ACPI: Interpreter enabled Jan 23 18:26:20.181108 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:26:20.181120 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:26:20.181133 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:26:20.181146 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:26:20.181163 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:26:20.181176 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:26:20.181651 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:26:20.182016 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:26:20.182304 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:26:20.182328 kernel: PCI host bridge to bus 0000:00 Jan 23 18:26:20.182739 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:26:20.183074 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:26:20.183330 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:26:20.183676 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 23 18:26:20.184026 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 23 18:26:20.184310 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 23 18:26:20.184700 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:26:20.185087 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:26:20.185486 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:26:20.185851 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 23 18:26:20.186276 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 23 18:26:20.186669 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 23 18:26:20.187027 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:26:20.187331 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 18:26:20.187720 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 23 18:26:20.188073 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 23 18:26:20.188374 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 23 18:26:20.188743 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 18:26:20.189026 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 23 18:26:20.189259 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 23 18:26:20.189580 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 23 18:26:20.189853 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 18:26:20.190207 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 23 18:26:20.190569 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 23 18:26:20.190818 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 23 18:26:20.191122 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 23 18:26:20.191488 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:26:20.192100 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:26:20.192376 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:26:20.192699 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 23 18:26:20.193004 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 23 18:26:20.193256 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:26:20.193619 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 23 18:26:20.193649 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:26:20.193663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:26:20.193677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:26:20.193685 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:26:20.193693 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:26:20.193701 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:26:20.193709 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:26:20.193721 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:26:20.193728 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:26:20.193736 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:26:20.193744 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:26:20.193752 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:26:20.193760 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:26:20.193768 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:26:20.193778 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:26:20.193786 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:26:20.193794 kernel: iommu: Default domain type: Translated Jan 23 18:26:20.193802 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:26:20.193810 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:26:20.193818 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:26:20.193826 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 23 18:26:20.193837 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 23 18:26:20.194124 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:26:20.194379 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:26:20.194721 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:26:20.194743 kernel: vgaarb: loaded Jan 23 18:26:20.194756 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 18:26:20.194772 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 18:26:20.194793 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:26:20.194806 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 
18:26:20.194820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:26:20.194834 kernel: pnp: PnP ACPI init Jan 23 18:26:20.195160 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 23 18:26:20.195176 kernel: pnp: PnP ACPI: found 6 devices Jan 23 18:26:20.195190 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:26:20.195198 kernel: NET: Registered PF_INET protocol family Jan 23 18:26:20.195206 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:26:20.195214 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:26:20.195222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:26:20.195230 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:26:20.195238 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:26:20.195254 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:26:20.195269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:26:20.195279 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:26:20.195287 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:26:20.195295 kernel: NET: Registered PF_XDP protocol family Jan 23 18:26:20.195605 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:26:20.196102 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:26:20.196310 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:26:20.196618 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 23 18:26:20.196879 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 23 18:26:20.197306 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 23 18:26:20.197325 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:26:20.197341 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 18:26:20.197509 kernel: Initialise system trusted keyrings Jan 23 18:26:20.197525 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:26:20.197533 kernel: Key type asymmetric registered Jan 23 18:26:20.197541 kernel: Asymmetric key parser 'x509' registered Jan 23 18:26:20.197549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:26:20.197557 kernel: io scheduler mq-deadline registered Jan 23 18:26:20.197565 kernel: io scheduler kyber registered Jan 23 18:26:20.197573 kernel: io scheduler bfq registered Jan 23 18:26:20.197584 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:26:20.197593 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:26:20.197601 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:26:20.197609 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:26:20.197617 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:26:20.197625 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:26:20.197632 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:26:20.197643 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:26:20.197651 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:26:20.197659 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jan 23 18:26:20.197898 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 23 18:26:20.198250 kernel: rtc_cmos 00:04: registered as rtc0 Jan 23 18:26:20.198540 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:26:15 UTC (1769192775) Jan 23 18:26:20.198745 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 23 18:26:20.198762 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 18:26:20.198770 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:26:20.198778 kernel: Segment Routing with IPv6 Jan 23 18:26:20.198786 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:26:20.198794 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:26:20.198801 kernel: Key type dns_resolver registered Jan 23 18:26:20.198809 kernel: IPI shorthand broadcast: enabled Jan 23 18:26:20.198820 kernel: sched_clock: Marking stable (4720046058, 664527233)->(6113808683, -729235392) Jan 23 18:26:20.198828 kernel: registered taskstats version 1 Jan 23 18:26:20.198836 kernel: Loading compiled-in X.509 certificates Jan 23 18:26:20.198844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:26:20.198852 kernel: Demotion targets for Node 0: null Jan 23 18:26:20.198860 kernel: Key type .fscrypt registered Jan 23 18:26:20.198868 kernel: Key type fscrypt-provisioning registered Jan 23 18:26:20.198879 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 18:26:20.198887 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:26:20.198895 kernel: ima: No architecture policies found Jan 23 18:26:20.198903 kernel: clk: Disabling unused clocks Jan 23 18:26:20.198911 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:26:20.198919 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:26:20.198981 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:26:20.198992 kernel: Run /init as init process Jan 23 18:26:20.199000 kernel: with arguments: Jan 23 18:26:20.199008 kernel: /init Jan 23 18:26:20.199016 kernel: with environment: Jan 23 18:26:20.199024 kernel: HOME=/ Jan 23 18:26:20.199031 kernel: TERM=linux Jan 23 18:26:20.199039 kernel: SCSI subsystem initialized Jan 23 18:26:20.199050 kernel: libata version 3.00 loaded. 
Jan 23 18:26:20.199323 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:26:20.199338 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:26:20.199638 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:26:20.199872 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:26:20.200158 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:26:20.200597 kernel: scsi host0: ahci Jan 23 18:26:20.200829 kernel: scsi host1: ahci Jan 23 18:26:20.201109 kernel: scsi host2: ahci Jan 23 18:26:20.201329 kernel: scsi host3: ahci Jan 23 18:26:20.201726 kernel: scsi host4: ahci Jan 23 18:26:20.202081 kernel: scsi host5: ahci Jan 23 18:26:20.202097 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 23 18:26:20.202105 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 23 18:26:20.202114 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 23 18:26:20.202122 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 23 18:26:20.202130 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 23 18:26:20.202138 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 23 18:26:20.202152 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:26:20.202160 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:26:20.202168 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:26:20.202176 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:26:20.202185 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:26:20.202193 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:26:20.202201 kernel: ata3.00: applying bridge limits Jan 23 18:26:20.202212 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:26:20.202220 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:26:20.202228 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:26:20.202236 kernel: ata3.00: configured for UDMA/100 Jan 23 18:26:20.202615 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:26:20.202850 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 18:26:20.203194 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 23 18:26:20.203210 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:26:20.203540 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:26:20.203560 kernel: GPT:16515071 != 27000831 Jan 23 18:26:20.203569 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:26:20.203577 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:26:20.203585 kernel: GPT:16515071 != 27000831 Jan 23 18:26:20.203603 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:26:20.203618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:26:20.203875 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:26:20.203889 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 18:26:20.203897 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:26:20.203906 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:26:20.203915 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 23 18:26:20.203987 kernel: raid6: avx2x4 gen() 30443 MB/s Jan 23 18:26:20.203996 kernel: raid6: avx2x2 gen() 26005 MB/s Jan 23 18:26:20.204004 kernel: raid6: avx2x1 gen() 20752 MB/s Jan 23 18:26:20.204012 kernel: raid6: using algorithm avx2x4 gen() 30443 MB/s Jan 23 18:26:20.204020 kernel: raid6: .... xor() 4188 MB/s, rmw enabled Jan 23 18:26:20.204029 kernel: raid6: using avx2x2 recovery algorithm Jan 23 18:26:20.204037 kernel: xor: automatically using best checksumming function avx Jan 23 18:26:20.204049 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:26:20.204058 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 23 18:26:20.204069 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2 Jan 23 18:26:20.204078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:26:20.204089 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:26:20.204097 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:26:20.204106 kernel: loop: module loaded Jan 23 18:26:20.204114 kernel: loop0: detected capacity change from 0 to 100560 Jan 23 18:26:20.204127 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:26:20.204143 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:26:20.204162 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:26:20.204179 systemd[1]: Detected virtualization kvm. Jan 23 18:26:20.204194 systemd[1]: Detected architecture x86-64. Jan 23 18:26:20.204207 systemd[1]: Running in initrd. Jan 23 18:26:20.204222 systemd[1]: No hostname configured, using default hostname. Jan 23 18:26:20.204236 systemd[1]: Hostname set to . Jan 23 18:26:20.204251 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:26:20.204272 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:26:20.204285 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:26:20.204301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:26:20.204315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:26:20.204330 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:26:20.204344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:26:20.204367 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:26:20.204489 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:26:20.204510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 18:26:20.204525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:26:20.204539 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:26:20.204553 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:26:20.204576 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:26:20.204590 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:26:20.204605 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:26:20.204619 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:26:20.204631 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:26:20.204639 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:26:20.204648 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:26:20.204660 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:26:20.204669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:26:20.204677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:26:20.204686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:26:20.204695 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:26:20.204703 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:26:20.204712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:26:20.204724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:26:20.204732 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:26:20.204741 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:26:20.204750 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:26:20.204758 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:26:20.204767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:26:20.204779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:26:20.204787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:26:20.204796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:26:20.204895 systemd-journald[323]: Collecting audit messages is enabled. Jan 23 18:26:20.204985 kernel: audit: type=1130 audit(1769192780.179:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.204995 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:26:20.205005 kernel: audit: type=1130 audit(1769192780.199:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.205018 systemd-journald[323]: Journal started Jan 23 18:26:20.205035 systemd-journald[323]: Runtime Journal (/run/log/journal/a2adc2b661064070bc3cc60831d92c81) is 6M, max 48.2M, 42.1M free. 
Jan 23 18:26:20.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.274625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:26:20.287488 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:26:20.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.304536 kernel: audit: type=1130 audit(1769192780.292:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.327621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:26:20.334364 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:26:20.358513 kernel: Bridge firewalling registered Jan 23 18:26:20.360704 systemd-modules-load[324]: Inserted module 'br_netfilter' Jan 23 18:26:20.387629 kernel: audit: type=1130 audit(1769192780.369:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.387689 kernel: audit: type=1130 audit(1769192780.387:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.369775 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:26:20.387322 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:26:20.390819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:26:20.407591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:26:20.428023 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:26:20.827001 kernel: hrtimer: interrupt took 4982353 ns Jan 23 18:26:20.831109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:26:20.866379 kernel: audit: type=1130 audit(1769192780.840:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:20.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.868022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:26:20.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.901242 kernel: audit: type=1130 audit(1769192780.867:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.902873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:26:20.943149 kernel: audit: type=1130 audit(1769192780.902:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:20.934721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:26:20.963768 kernel: audit: type=1334 audit(1769192780.953:10): prog-id=6 op=LOAD Jan 23 18:26:20.953000 audit: BPF prog-id=6 op=LOAD Jan 23 18:26:20.959892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:26:20.992826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:26:20.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:21.008556 kernel: audit: type=1130 audit(1769192780.992:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:21.028895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:26:21.037911 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:26:21.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:21.090545 systemd-resolved[348]: Positive Trust Anchors: Jan 23 18:26:21.090612 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:26:21.090619 systemd-resolved[348]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:26:21.090666 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:26:21.174002 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:26:21.204565 systemd-resolved[348]: Defaulting to hostname 'linux'. Jan 23 18:26:21.208309 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:26:21.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:21.223354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:26:21.538562 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:26:21.607710 kernel: iscsi: registered transport (tcp) Jan 23 18:26:21.659176 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:26:21.659309 kernel: QLogic iSCSI HBA Driver Jan 23 18:26:21.736604 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:26:21.777814 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:26:21.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:21.779831 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:26:22.014079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:26:22.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.019033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:26:22.029226 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:26:22.126055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:26:22.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.138000 audit: BPF prog-id=7 op=LOAD Jan 23 18:26:22.138000 audit: BPF prog-id=8 op=LOAD Jan 23 18:26:22.140010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 18:26:22.260153 systemd-udevd[588]: Using default interface naming scheme 'v257'. Jan 23 18:26:22.295548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:26:22.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.318883 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:26:22.391096 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Jan 23 18:26:22.415584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:26:22.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.430000 audit: BPF prog-id=9 op=LOAD Jan 23 18:26:22.431778 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:26:22.489187 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:26:22.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.494483 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:26:22.545913 systemd-networkd[718]: lo: Link UP Jan 23 18:26:22.545980 systemd-networkd[718]: lo: Gained carrier Jan 23 18:26:22.557688 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:26:22.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.558174 systemd[1]: Reached target network.target - Network. Jan 23 18:26:22.745349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:26:22.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:22.846782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:26:22.985872 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:26:23.034255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:26:23.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:23.058545 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:26:23.072091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:26:23.100754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:26:23.133921 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 18:26:23.134012 kernel: AES CTR mode by8 optimization enabled Jan 23 18:26:23.143304 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 23 18:26:23.230554 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:26:23.231635 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:26:23.234915 systemd-networkd[718]: eth0: Link UP Jan 23 18:26:23.236133 systemd-networkd[718]: eth0: Gained carrier Jan 23 18:26:23.236145 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:26:23.251166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:26:23.258675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:26:23.265051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:26:23.274656 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:26:23.285650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:26:23.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:23.292742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:26:23.292807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:26:23.318512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:26:23.350752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:26:23.357551 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:26:23.388174 disk-uuid[847]: Primary Header is updated. Jan 23 18:26:23.388174 disk-uuid[847]: Secondary Entries is updated. Jan 23 18:26:23.388174 disk-uuid[847]: Secondary Header is updated. Jan 23 18:26:23.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:23.393718 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:26:23.868032 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:26:23.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:24.576556 disk-uuid[852]: Warning: The kernel is still using the old partition table. Jan 23 18:26:24.576556 disk-uuid[852]: The new table will be used at the next reboot or after you Jan 23 18:26:24.576556 disk-uuid[852]: run partprobe(8) or kpartx(8) Jan 23 18:26:24.576556 disk-uuid[852]: The operation has completed successfully. Jan 23 18:26:24.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:24.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:24.593334 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 23 18:26:24.593611 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:26:24.610178 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:26:24.686507 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Jan 23 18:26:24.698095 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:26:24.698147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:26:24.713515 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:26:24.713570 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:26:24.733604 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:26:24.737617 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:26:24.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:24.751909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:26:25.140755 systemd-networkd[718]: eth0: Gained IPv6LL Jan 23 18:26:25.309196 ignition[885]: Ignition 2.24.0 Jan 23 18:26:25.310265 ignition[885]: Stage: fetch-offline Jan 23 18:26:25.310746 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:25.310773 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:25.311082 ignition[885]: parsed url from cmdline: "" Jan 23 18:26:25.311089 ignition[885]: no config URL provided Jan 23 18:26:25.319326 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:26:25.319362 ignition[885]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:26:25.319547 ignition[885]: op(1): [started] loading QEMU firmware config module Jan 23 18:26:25.319558 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 18:26:25.342778 ignition[885]: op(1): [finished] loading QEMU firmware config module Jan 23 18:26:25.769520 ignition[885]: parsing config with SHA512: e41251e5ddaab8f61458c6dbde0b2b833c19e966e364d66a7f01f8a485111beac68367c251bbcae399e214ca2f1295f7ba257729c71044da9f5fd321dec9e321 Jan 23 18:26:25.789344 unknown[885]: fetched base config from "system" Jan 23 18:26:25.789474 unknown[885]: fetched user config from "qemu" Jan 23 18:26:25.799816 ignition[885]: fetch-offline: fetch-offline passed Jan 23 18:26:25.803992 ignition[885]: Ignition finished successfully Jan 23 18:26:25.816684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:26:25.843198 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 23 18:26:25.843229 kernel: audit: type=1130 audit(1769192785.821:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:25.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:25.823203 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 18:26:25.825127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
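For reference, the fetch-offline stage above logs the SHA512 digest of the config it parsed (the e41251e5... value). A digest of that form can be reproduced for any config blob with the Python standard library; this is only a sketch with placeholder bytes, not Ignition's own code and not the config from this boot.

#!/usr/bin/env python3
# Sketch: reproduce an Ignition-style "parsing config with SHA512: ..." digest.
# The config bytes below are a placeholder, not the real config.
import hashlib

config_blob = b'{"ignition": {"version": "3.4.0"}}'  # placeholder blob
digest = hashlib.sha512(config_blob).hexdigest()
print(f"parsing config with SHA512: {digest}")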
Jan 23 18:26:25.936552 ignition[895]: Ignition 2.24.0 Jan 23 18:26:25.936602 ignition[895]: Stage: kargs Jan 23 18:26:25.937099 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:25.937112 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:25.940713 ignition[895]: kargs: kargs passed Jan 23 18:26:25.969882 ignition[895]: Ignition finished successfully Jan 23 18:26:25.978042 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:26:26.001025 kernel: audit: type=1130 audit(1769192785.977:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:25.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:25.981486 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:26:26.172707 ignition[902]: Ignition 2.24.0 Jan 23 18:26:26.172762 ignition[902]: Stage: disks Jan 23 18:26:26.172908 ignition[902]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:26.172919 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:26.173786 ignition[902]: disks: disks passed Jan 23 18:26:26.173835 ignition[902]: Ignition finished successfully Jan 23 18:26:26.202899 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:26:26.226176 kernel: audit: type=1130 audit(1769192786.208:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:26.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:26.209640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:26:26.232995 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:26:26.245067 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:26:26.245244 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:26:26.256122 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:26:26.279234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:26:26.376365 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 23 18:26:26.383699 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:26:26.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:26.397324 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:26:26.420038 kernel: audit: type=1130 audit(1769192786.394:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:26.641566 kernel: EXT4-fs (vda9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:26:26.643366 systemd[1]: Mounted sysroot.mount - /sysroot. 
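The fsck summary above ("ROOT: clean, 15/456736 files, 38230/456704 blocks") implies very low usage on the ROOT filesystem; a quick calculation, assuming the usual used/total reading of those pairs:

# Sketch: usage implied by the e2fsck summary logged above.
used_inodes, total_inodes = 15, 456736
used_blocks, total_blocks = 38230, 456704
print(f"inodes in use: {used_inodes}/{total_inodes} ({used_inodes / total_inodes:.3%})")
print(f"blocks in use: {used_blocks}/{total_blocks} ({used_blocks / total_blocks:.2%})")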
Jan 23 18:26:26.645988 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:26:26.668517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:26:26.676672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:26:26.681102 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:26:26.681169 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:26:26.681213 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:26:26.723894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:26:26.737689 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Jan 23 18:26:26.732611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:26:26.757212 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:26:26.757238 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:26:26.764508 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:26:26.764535 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:26:26.766629 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:26:27.061271 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:26:27.088106 kernel: audit: type=1130 audit(1769192787.066:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.068652 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:26:27.097816 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:26:27.120721 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:26:27.130164 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:26:27.162792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:26:27.182997 kernel: audit: type=1130 audit(1769192787.162:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.193149 ignition[1018]: INFO : Ignition 2.24.0 Jan 23 18:26:27.193149 ignition[1018]: INFO : Stage: mount Jan 23 18:26:27.201881 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:27.201881 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:27.211107 ignition[1018]: INFO : mount: mount passed Jan 23 18:26:27.211107 ignition[1018]: INFO : Ignition finished successfully Jan 23 18:26:27.219640 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 18:26:27.237369 kernel: audit: type=1130 audit(1769192787.223:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:27.225816 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:26:27.645163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:26:27.702668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 23 18:26:27.702730 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:26:27.702748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:26:27.722804 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:26:27.723040 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:26:27.725264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:26:27.796726 ignition[1046]: INFO : Ignition 2.24.0 Jan 23 18:26:27.796726 ignition[1046]: INFO : Stage: files Jan 23 18:26:27.807025 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:27.807025 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:27.822215 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:26:27.822215 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:26:27.822215 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:26:27.822215 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:26:27.822215 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:26:27.862741 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:26:27.862741 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:26:27.862741 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:26:27.822685 unknown[1046]: wrote ssh authorized keys file for user: core Jan 23 18:26:27.957014 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:26:28.124778 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:26:28.124778 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:26:28.124778 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:26:28.124778 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO 
: files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:26:28.162036 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:26:28.621545 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:26:30.880788 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:26:30.880788 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:26:30.901812 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:26:30.931864 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:26:30.931864 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:26:30.931864 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:26:30.931864 ignition[1046]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:26:30.969187 ignition[1046]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:26:30.969187 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:26:30.969187 ignition[1046]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:26:31.032042 ignition[1046]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:26:31.053098 ignition[1046]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 
18:26:31.062591 ignition[1046]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:26:31.062591 ignition[1046]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:26:31.062591 ignition[1046]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:26:31.062591 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:26:31.062591 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:26:31.062591 ignition[1046]: INFO : files: files passed Jan 23 18:26:31.062591 ignition[1046]: INFO : Ignition finished successfully Jan 23 18:26:31.114605 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:26:31.158104 kernel: audit: type=1130 audit(1769192791.141:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.159730 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:26:31.182167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:26:31.256715 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:26:31.261605 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:26:31.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.274196 initrd-setup-root-after-ignition[1077]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:26:31.295816 kernel: audit: type=1130 audit(1769192791.271:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.295856 kernel: audit: type=1131 audit(1769192791.271:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.294931 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:26:31.326587 kernel: audit: type=1130 audit(1769192791.307:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:31.326664 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:26:31.326664 initrd-setup-root-after-ignition[1079]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:26:31.309261 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:26:31.357335 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:26:31.334706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:26:31.455836 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:26:31.456169 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:26:31.655778 kernel: audit: type=1130 audit(1769192791.502:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.655824 kernel: audit: type=1131 audit(1769192791.502:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.503541 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:26:31.661058 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:26:31.666857 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:26:31.669756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:26:31.755816 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:26:31.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.770804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:26:31.792599 kernel: audit: type=1130 audit(1769192791.765:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.825368 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:26:31.825706 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:26:31.831911 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:26:31.842690 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:26:31.862104 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:26:31.862596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
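The Ignition files stage reported a few entries above writes regular files under the mounted /sysroot and creates the /etc/extensions/kubernetes.raw symlink. A rough sketch of those two operations follows, using a throwaway prefix instead of the real /sysroot; this is not Ignition's implementation, the helper names are invented for the sketch, and the update.conf contents are assumed.

#!/usr/bin/env python3
# Sketch of the file and symlink writes reported by the Ignition files stage.
from pathlib import Path

SYSROOT = Path("/tmp/sysroot-demo")  # stand-in for /sysroot in the log

def write_file(rel_path: str, data: bytes) -> None:
    # Create parent directories and write the file under the sysroot prefix.
    target = SYSROOT / rel_path.lstrip("/")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)

def write_link(rel_path: str, link_target: str) -> None:
    # Create (or replace) a symlink under the sysroot prefix.
    link = SYSROOT / rel_path.lstrip("/")
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to(link_target)

if __name__ == "__main__":
    write_file("/etc/flatcar/update.conf", b"GROUP=stable\n")  # contents assumed
    write_link("/etc/extensions/kubernetes.raw",
               "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")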
Jan 23 18:26:31.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.923875 kernel: audit: type=1131 audit(1769192791.898:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:31.926670 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:26:31.932322 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:26:31.943298 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:26:31.958198 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:26:31.962626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:26:31.984823 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:26:31.996140 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:26:32.001902 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:26:32.014348 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:26:32.030036 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:26:32.035791 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:26:32.049919 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:26:32.050288 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:26:32.073760 kernel: audit: type=1131 audit(1769192792.054:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.074066 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:26:32.079576 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:26:32.085015 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:26:32.085560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:26:32.129822 kernel: audit: type=1131 audit(1769192792.112:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.095299 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:26:32.095710 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:26:32.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.130038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 23 18:26:32.130234 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:26:32.182288 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:26:32.191911 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:26:32.195697 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:26:32.196124 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:26:32.207768 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:26:32.222851 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:26:32.223072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:26:32.241035 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:26:32.241255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:26:32.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.245735 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:26:32.245916 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:26:32.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.261904 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:26:32.262265 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:26:32.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.266611 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:26:32.266792 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:26:32.285571 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:26:32.288700 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:26:32.288867 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:26:32.336719 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:26:32.340517 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:26:32.340726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:26:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.362379 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:26:32.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.362798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:26:32.379676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:26:32.379859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 18:26:32.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.415696 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:26:32.416008 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:26:32.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.440570 ignition[1103]: INFO : Ignition 2.24.0 Jan 23 18:26:32.440570 ignition[1103]: INFO : Stage: umount Jan 23 18:26:32.440570 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:26:32.440570 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:26:32.440570 ignition[1103]: INFO : umount: umount passed Jan 23 18:26:32.440570 ignition[1103]: INFO : Ignition finished successfully Jan 23 18:26:32.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.433045 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:26:32.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.433763 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:26:32.433874 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:26:32.449709 systemd[1]: Stopped target network.target - Network. Jan 23 18:26:32.453600 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:26:32.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.453699 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:26:32.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.479226 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:26:32.479309 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 23 18:26:32.488673 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:26:32.488746 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:26:32.497931 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:26:32.498091 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:26:32.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.522780 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:26:32.533737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:26:32.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.539237 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:26:32.539568 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:26:32.560216 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:26:32.560354 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:26:32.596085 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:26:32.657000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:26:32.596300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:26:32.624497 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:26:32.624688 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:26:32.683000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:26:32.658070 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:26:32.659881 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:26:32.660000 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:26:32.706244 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:26:32.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.711557 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:26:32.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.711644 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:26:32.718730 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:26:32.718789 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:26:32.719006 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:26:32.719061 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:26:32.733510 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 18:26:32.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.764612 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:26:32.764995 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:26:32.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.780683 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:26:32.780942 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:26:32.784734 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:26:32.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.784801 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:26:32.799918 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:26:32.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.800020 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:26:32.804336 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:26:32.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.804478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:26:32.823664 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:26:32.823748 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:26:32.838244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:26:32.838321 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:26:32.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.854278 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:26:32.857855 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:26:32.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.857935 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:26:32.867726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:26:32.867785 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:26:32.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 18:26:32.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:32.893857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:26:32.893926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:26:32.910258 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:26:32.910526 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:26:32.922713 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:26:32.928323 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:26:32.977891 systemd[1]: Switching root. Jan 23 18:26:33.031829 systemd-journald[323]: Journal stopped Jan 23 18:26:35.565754 systemd-journald[323]: Received SIGTERM from PID 1 (systemd). Jan 23 18:26:35.565864 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:26:35.565881 kernel: SELinux: policy capability open_perms=1 Jan 23 18:26:35.565899 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:26:35.565917 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:26:35.565933 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:26:35.565952 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:26:35.566036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:26:35.566055 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:26:35.566176 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:26:35.566199 systemd[1]: Successfully loaded SELinux policy in 120.950ms. Jan 23 18:26:35.566227 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.938ms. Jan 23 18:26:35.566248 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:26:35.566270 systemd[1]: Detected virtualization kvm. Jan 23 18:26:35.566298 systemd[1]: Detected architecture x86-64. Jan 23 18:26:35.566326 systemd[1]: Detected first boot. Jan 23 18:26:35.566345 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:26:35.566369 zram_generator::config[1147]: No configuration found. Jan 23 18:26:35.566521 kernel: Guest personality initialized and is inactive Jan 23 18:26:35.566544 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:26:35.566568 kernel: Initialized host personality Jan 23 18:26:35.566591 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:26:35.566614 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:26:35.566635 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:26:35.566657 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:26:35.566677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
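The SERVICE_START / SERVICE_STOP audit records that run through this log can be folded into a per-unit lifecycle summary. A short sketch; the boot.log filename and the regular expression are assumptions tailored to the record format shown here, not part of any systemd tooling.

#!/usr/bin/env python3
# Sketch: summarize SERVICE_START/SERVICE_STOP audit records per unit.
import re
from collections import defaultdict

AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@.-]+)")

def summarize(path: str = "boot.log") -> dict[str, list[str]]:
    # Collect the sequence of start/stop events seen for each unit name.
    events: dict[str, list[str]] = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for action, unit in AUDIT_RE.findall(line):
                events[unit].append(action)
    return events

if __name__ == "__main__":
    for unit, actions in sorted(summarize().items()):
        print(f"{unit}: {' -> '.join(actions)}")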
Jan 23 18:26:35.566710 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:26:35.566734 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:26:35.566762 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:26:35.566783 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:26:35.566879 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:26:35.566902 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:26:35.566925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:26:35.566946 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:26:35.567041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:26:35.567073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:26:35.567093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:26:35.567117 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:26:35.567133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:26:35.567148 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:26:35.567162 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:26:35.567179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:26:35.567193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:26:35.567205 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:26:35.567218 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:26:35.567231 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:26:35.567244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:26:35.567256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:26:35.567272 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:26:35.567285 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:26:35.567298 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:26:35.567464 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:26:35.567482 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:26:35.567494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:26:35.567507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:26:35.567524 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:26:35.567537 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:26:35.567549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:26:35.567563 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:26:35.567576 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. 
Jan 23 18:26:35.567588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:26:35.567601 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:26:35.567616 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:26:35.567628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:26:35.567641 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:26:35.567653 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:26:35.567666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:26:35.567679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:26:35.567691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:26:35.567707 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:26:35.567720 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:26:35.567732 systemd[1]: Reached target machines.target - Containers. Jan 23 18:26:35.567799 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:26:35.567813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:26:35.567827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:26:35.567839 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:26:35.567856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:26:35.567869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:26:35.567882 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:26:35.567895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:26:35.567908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:26:35.567922 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:26:35.567935 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:26:35.567951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:26:35.568023 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:26:35.568037 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:26:35.568051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:26:35.568063 kernel: ACPI: bus type drm_connector registered Jan 23 18:26:35.568079 kernel: fuse: init (API version 7.41) Jan 23 18:26:35.568092 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:26:35.568105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:26:35.568118 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 23 18:26:35.568130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:26:35.568248 systemd-journald[1233]: Collecting audit messages is enabled. Jan 23 18:26:35.568280 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:26:35.568295 systemd-journald[1233]: Journal started Jan 23 18:26:35.568317 systemd-journald[1233]: Runtime Journal (/run/log/journal/a2adc2b661064070bc3cc60831d92c81) is 6M, max 48.2M, 42.1M free. Jan 23 18:26:34.860000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:26:35.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.468000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:26:35.468000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:26:35.483000 audit: BPF prog-id=15 op=LOAD Jan 23 18:26:35.483000 audit: BPF prog-id=16 op=LOAD Jan 23 18:26:35.484000 audit: BPF prog-id=17 op=LOAD Jan 23 18:26:35.561000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:26:35.561000 audit[1233]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd73af5250 a2=4000 a3=0 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:26:35.561000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:26:34.469215 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:26:34.492884 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:26:34.494053 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:26:34.495327 systemd[1]: systemd-journald.service: Consumed 1.502s CPU time. Jan 23 18:26:35.585507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:26:35.604572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:26:35.618365 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:26:35.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.620532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:26:35.626520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:26:35.632782 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:26:35.638607 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:26:35.644768 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
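journald's size report above ("Runtime Journal ... is 6M, max 48.2M, 42.1M free") uses suffixed values; a small helper to turn them into byte counts, assuming the 1024-based K/M/G suffixes systemd normally prints (an assumption, not verified against this build).

# Sketch: convert journald size strings such as "6M", "48.2M", "42.1M" into bytes.
SUFFIXES = {"K": 1024, "M": 1024**2, "G": 1024**3}

def to_bytes(value: str) -> int:
    if value and value[-1] in SUFFIXES:
        return int(float(value[:-1]) * SUFFIXES[value[-1]])
    return int(value)

for field in ("6M", "48.2M", "42.1M"):
    print(field, "=", to_bytes(field), "bytes")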
Jan 23 18:26:35.650885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:26:35.656885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:26:35.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.664025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:26:35.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.671707 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:26:35.672274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:26:35.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.679936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:26:35.680611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:26:35.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.688232 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:26:35.688862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:26:35.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.694732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:26:35.695264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:26:35.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.702884 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 23 18:26:35.703809 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:26:35.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.710940 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:26:35.711827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:26:35.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.717897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:26:35.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.725829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:26:35.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.737863 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:26:35.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.744927 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:26:35.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.752805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:26:35.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.778046 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:26:35.785538 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:26:35.795808 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:26:35.804245 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 23 18:26:35.810555 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:26:35.810677 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:26:35.818177 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:26:35.827296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:26:35.827592 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:26:35.830152 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:26:35.839093 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:26:35.844143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:26:35.846489 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:26:35.852239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:26:35.856707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:26:35.902340 systemd-journald[1233]: Time spent on flushing to /var/log/journal/a2adc2b661064070bc3cc60831d92c81 is 39.668ms for 1100 entries. Jan 23 18:26:35.902340 systemd-journald[1233]: System Journal (/var/log/journal/a2adc2b661064070bc3cc60831d92c81) is 8M, max 163.5M, 155.5M free. Jan 23 18:26:36.007614 systemd-journald[1233]: Received client request to flush runtime journal. Jan 23 18:26:36.007671 kernel: loop1: detected capacity change from 0 to 111560 Jan 23 18:26:35.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:35.901271 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:26:35.913028 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:26:35.928570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:26:35.934297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:26:35.949059 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:26:35.957642 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:26:35.989256 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:26:36.013215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:26:36.021742 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:26:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:36.047700 kernel: loop2: detected capacity change from 0 to 229808 Jan 23 18:26:36.057231 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:26:36.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.072661 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:26:36.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.079000 audit: BPF prog-id=18 op=LOAD Jan 23 18:26:36.080000 audit: BPF prog-id=19 op=LOAD Jan 23 18:26:36.080000 audit: BPF prog-id=20 op=LOAD Jan 23 18:26:36.082088 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:26:36.088000 audit: BPF prog-id=21 op=LOAD Jan 23 18:26:36.090080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:26:36.096614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:26:36.109000 audit: BPF prog-id=22 op=LOAD Jan 23 18:26:36.109000 audit: BPF prog-id=23 op=LOAD Jan 23 18:26:36.110000 audit: BPF prog-id=24 op=LOAD Jan 23 18:26:36.112222 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:26:36.114682 kernel: loop3: detected capacity change from 0 to 50784 Jan 23 18:26:36.122000 audit: BPF prog-id=25 op=LOAD Jan 23 18:26:36.122000 audit: BPF prog-id=26 op=LOAD Jan 23 18:26:36.122000 audit: BPF prog-id=27 op=LOAD Jan 23 18:26:36.126661 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:26:36.185528 kernel: loop4: detected capacity change from 0 to 111560 Jan 23 18:26:36.213598 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 23 18:26:36.213623 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 23 18:26:36.261289 kernel: loop5: detected capacity change from 0 to 229808 Jan 23 18:26:36.276810 systemd-nsresourced[1290]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:26:36.282595 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:26:36.312552 kernel: kauditd_printk_skb: 95 callbacks suppressed Jan 23 18:26:36.312643 kernel: audit: type=1130 audit(1769192796.290:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.322328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:26:36.334481 kernel: loop6: detected capacity change from 0 to 50784 Jan 23 18:26:36.334563 kernel: audit: type=1130 audit(1769192796.334:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:36.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:36.367307 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 23 18:26:36.391093 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 23 18:26:36.398048 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:26:36.398108 systemd[1]: Reloading... Jan 23 18:26:36.555521 zram_generator::config[1346]: No configuration found. Jan 23 18:26:36.631746 systemd-oomd[1287]: No swap; memory pressure usage will be degraded Jan 23 18:26:36.649870 systemd-resolved[1288]: Positive Trust Anchors: Jan 23 18:26:36.650377 systemd-resolved[1288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:26:36.650549 systemd-resolved[1288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:26:36.650596 systemd-resolved[1288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:26:36.657267 systemd-resolved[1288]: Defaulting to hostname 'linux'. Jan 23 18:26:36.932501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:26:36.943531 systemd[1]: Reloading finished in 544 ms. Jan 23 18:26:37.011579 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:26:37.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.019267 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:26:37.040831 kernel: audit: type=1130 audit(1769192797.017:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.070618 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:26:37.094344 kernel: audit: type=1130 audit(1769192797.068:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:37.133911 kernel: audit: type=1130 audit(1769192797.112:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.135777 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:26:37.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.154309 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:26:37.219222 kernel: audit: type=1130 audit(1769192797.151:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.265901 kernel: audit: type=1130 audit(1769192797.239:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:37.282868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:26:37.357161 systemd[1]: Starting ensure-sysext.service... Jan 23 18:26:37.370847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:26:37.380000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:26:37.388738 kernel: audit: type=1334 audit(1769192797.380:149): prog-id=8 op=UNLOAD Jan 23 18:26:37.380000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:26:37.398638 kernel: audit: type=1334 audit(1769192797.380:150): prog-id=7 op=UNLOAD Jan 23 18:26:37.398680 kernel: audit: type=1334 audit(1769192797.390:151): prog-id=28 op=LOAD Jan 23 18:26:37.390000 audit: BPF prog-id=28 op=LOAD Jan 23 18:26:37.397000 audit: BPF prog-id=29 op=LOAD Jan 23 18:26:37.400131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 18:26:37.439000 audit: BPF prog-id=30 op=LOAD Jan 23 18:26:37.439000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:26:37.440000 audit: BPF prog-id=31 op=LOAD Jan 23 18:26:37.440000 audit: BPF prog-id=32 op=LOAD Jan 23 18:26:37.440000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:26:37.440000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:26:37.441000 audit: BPF prog-id=33 op=LOAD Jan 23 18:26:37.460000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:26:37.461000 audit: BPF prog-id=34 op=LOAD Jan 23 18:26:37.461000 audit: BPF prog-id=35 op=LOAD Jan 23 18:26:37.461000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:26:37.461000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:26:37.463000 audit: BPF prog-id=36 op=LOAD Jan 23 18:26:37.464000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:26:37.464000 audit: BPF prog-id=37 op=LOAD Jan 23 18:26:37.464000 audit: BPF prog-id=38 op=LOAD Jan 23 18:26:37.464000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:26:37.464000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:26:37.465000 audit: BPF prog-id=39 op=LOAD Jan 23 18:26:37.465000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:26:37.467000 audit: BPF prog-id=40 op=LOAD Jan 23 18:26:37.467000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:26:37.468000 audit: BPF prog-id=41 op=LOAD Jan 23 18:26:37.468000 audit: BPF prog-id=42 op=LOAD Jan 23 18:26:37.468000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:26:37.468000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:26:37.575374 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:26:37.575568 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:26:37.575916 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:26:37.577895 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 23 18:26:37.578101 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 23 18:26:37.578379 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:26:37.578568 systemd[1]: Reloading... Jan 23 18:26:37.586626 systemd-udevd[1376]: Using default interface naming scheme 'v257'. Jan 23 18:26:37.593831 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:26:37.593842 systemd-tmpfiles[1375]: Skipping /boot Jan 23 18:26:37.650309 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:26:37.650476 systemd-tmpfiles[1375]: Skipping /boot Jan 23 18:26:38.366541 zram_generator::config[1421]: No configuration found. Jan 23 18:26:39.488591 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 18:26:39.515065 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:26:39.539714 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:26:39.568473 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:26:39.575228 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:26:40.374370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:26:41.805963 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2247535510 wd_nsec: 2247534657 Jan 23 18:26:41.816810 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:26:41.820588 systemd[1]: Reloading finished in 4241 ms. 
Jan 23 18:26:41.859492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:26:41.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:41.875654 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 23 18:26:41.875818 kernel: audit: type=1130 audit(1769192801.869:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:41.903000 audit: BPF prog-id=43 op=LOAD Jan 23 18:26:41.922722 kernel: audit: type=1334 audit(1769192801.903:180): prog-id=43 op=LOAD Jan 23 18:26:41.903000 audit: BPF prog-id=44 op=LOAD Jan 23 18:26:41.978172 kernel: audit: type=1334 audit(1769192801.903:181): prog-id=44 op=LOAD Jan 23 18:26:41.979905 kernel: audit: type=1334 audit(1769192801.903:182): prog-id=45 op=LOAD Jan 23 18:26:41.903000 audit: BPF prog-id=45 op=LOAD Jan 23 18:26:41.905000 audit: BPF prog-id=46 op=LOAD Jan 23 18:26:41.996112 kernel: audit: type=1334 audit(1769192801.905:183): prog-id=46 op=LOAD Jan 23 18:26:41.996837 kernel: audit: type=1334 audit(1769192801.905:184): prog-id=47 op=LOAD Jan 23 18:26:41.905000 audit: BPF prog-id=47 op=LOAD Jan 23 18:26:41.905000 audit: BPF prog-id=48 op=LOAD Jan 23 18:26:42.038373 kernel: audit: type=1334 audit(1769192801.905:185): prog-id=48 op=LOAD Jan 23 18:26:42.038852 kernel: audit: type=1334 audit(1769192801.924:186): prog-id=49 op=LOAD Jan 23 18:26:42.039082 kernel: audit: type=1334 audit(1769192801.924:187): prog-id=50 op=LOAD Jan 23 18:26:42.039197 kernel: audit: type=1334 audit(1769192801.925:188): prog-id=51 op=LOAD Jan 23 18:26:41.924000 audit: BPF prog-id=49 op=LOAD Jan 23 18:26:41.924000 audit: BPF prog-id=50 op=LOAD Jan 23 18:26:41.925000 audit: BPF prog-id=51 op=LOAD Jan 23 18:26:41.928000 audit: BPF prog-id=52 op=LOAD Jan 23 18:26:41.931000 audit: BPF prog-id=53 op=LOAD Jan 23 18:26:41.932000 audit: BPF prog-id=54 op=LOAD Jan 23 18:26:41.932000 audit: BPF prog-id=55 op=LOAD Jan 23 18:26:41.934000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:26:41.934000 audit: BPF prog-id=56 op=LOAD Jan 23 18:26:41.934000 audit: BPF prog-id=57 op=LOAD Jan 23 18:26:41.934000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:26:41.935000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:26:42.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.004321 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 18:26:42.261628 systemd[1]: Finished ensure-sysext.service. Jan 23 18:26:42.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.297524 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:26:42.302286 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:26:42.343036 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:26:42.367731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:26:42.375964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:26:42.388085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:26:42.400899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:26:42.415808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:26:42.436846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:26:42.437177 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:26:42.448965 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:26:42.460582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:26:42.469353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:26:42.568882 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:26:42.579000 audit: BPF prog-id=58 op=LOAD Jan 23 18:26:42.586503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:26:42.607000 audit: BPF prog-id=59 op=LOAD Jan 23 18:26:42.616142 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:26:42.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:42.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.639785 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:26:42.655584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:26:42.655786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:26:42.658862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:26:42.666956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:26:42.668573 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:26:42.669131 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:26:42.671624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:26:42.672121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:26:42.675538 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:26:42.675859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:26:42.702568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:26:42.702700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:26:42.744000 audit[1522]: SYSTEM_BOOT pid=1522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.803064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:26:42.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.859040 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:26:42.917811 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 23 18:26:42.923586 kernel: kvm_amd: TSC scaling supported Jan 23 18:26:42.923690 kernel: kvm_amd: Nested Virtualization enabled Jan 23 18:26:42.923729 kernel: kvm_amd: Nested Paging enabled Jan 23 18:26:42.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:42.933193 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 18:26:42.933268 kernel: kvm_amd: PMU virtualization is disabled Jan 23 18:26:42.988899 augenrules[1541]: No rules Jan 23 18:26:42.987000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:26:42.987000 audit[1541]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed9309790 a2=420 a3=0 items=0 ppid=1495 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:26:42.987000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:26:43.022669 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:26:43.026167 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:26:43.039785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:26:43.042217 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:26:43.191946 systemd-networkd[1517]: lo: Link UP Jan 23 18:26:43.192052 systemd-networkd[1517]: lo: Gained carrier Jan 23 18:26:43.195493 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:26:43.198174 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:26:43.198236 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:26:43.202693 systemd-networkd[1517]: eth0: Link UP Jan 23 18:26:43.204186 systemd-networkd[1517]: eth0: Gained carrier Jan 23 18:26:43.204257 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:26:43.231558 systemd-networkd[1517]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:26:43.276535 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:26:43.329750 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 18:26:43.329871 systemd-timesyncd[1521]: Initial clock synchronization to Fri 2026-01-23 18:26:43.462202 UTC. Jan 23 18:26:43.691958 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:26:43.709119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:26:43.724525 systemd[1]: Reached target network.target - Network. Jan 23 18:26:43.730023 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:26:43.738539 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
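The systemd-networkd messages above record a DHCPv4 lease of 10.0.0.29/16 with gateway 10.0.0.1 on eth0. A minimal sketch of what that prefix implies, using only the values logged above (the derived network size is illustration, not something the log states):

import ipaddress

# Lease recorded by systemd-networkd above: 10.0.0.29/16, gateway 10.0.0.1
iface = ipaddress.ip_interface("10.0.0.29/16")
print(iface.network)                                      # 10.0.0.0/16
print(iface.network.num_addresses - 2)                    # 65534 usable host addresses
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: the gateway is on-link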
Jan 23 18:26:43.750177 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:26:43.836121 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:26:44.607903 systemd-networkd[1517]: eth0: Gained IPv6LL Jan 23 18:26:44.655389 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:26:44.662732 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:26:44.984223 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:26:44.995553 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:26:45.008519 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:26:45.080194 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:26:45.087017 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:26:45.092737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:26:45.100070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:26:45.107727 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:26:45.117155 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:26:45.125600 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:26:45.136827 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:26:45.148636 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 18:26:45.154346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:26:45.170389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:26:45.170905 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:26:45.175718 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:26:45.185381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:26:45.195381 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:26:45.209370 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:26:45.216824 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:26:45.223027 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:26:45.262149 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:26:45.270831 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:26:45.281552 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:26:45.289711 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:26:45.295920 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:26:45.303228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:26:45.303312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 18:26:45.347624 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:26:45.355769 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 18:26:45.388962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:26:45.399522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:26:45.402537 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:26:45.426761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:26:45.436298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:26:45.441100 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:26:45.449923 jq[1566]: false Jan 23 18:26:45.459302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:26:45.468803 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:26:45.477694 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 23 18:26:45.473616 oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 23 18:26:45.480859 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:26:45.487694 extend-filesystems[1567]: Found /dev/vda6 Jan 23 18:26:45.493137 extend-filesystems[1567]: Found /dev/vda9 Jan 23 18:26:45.490001 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:26:45.498687 extend-filesystems[1567]: Checking size of /dev/vda9 Jan 23 18:26:45.499076 oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 23 18:26:45.509693 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:26:45.512848 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 23 18:26:45.512848 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:26:45.512848 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 23 18:26:45.499102 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:26:45.499216 oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 23 18:26:45.529765 extend-filesystems[1567]: Resized partition /dev/vda9 Jan 23 18:26:45.539607 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:26:45.545892 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 23 18:26:45.545892 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:26:45.537646 oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 23 18:26:45.541765 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:26:45.537669 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:26:45.554551 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 23 18:26:45.569650 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 18:26:45.576728 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:26:45.579298 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:26:45.581089 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:26:45.588625 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:26:45.605341 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:26:45.628581 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 23 18:26:45.614859 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:26:45.615512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:26:45.616131 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:26:45.616806 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:26:45.634850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:26:46.098305 jq[1589]: true Jan 23 18:26:46.052298 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:26:46.080862 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:26:46.090123 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:26:46.101531 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:26:46.101531 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 18:26:46.101531 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 23 18:26:46.157071 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Jan 23 18:26:46.162574 update_engine[1588]: I20260123 18:26:46.131690 1588 main.cc:92] Flatcar Update Engine starting Jan 23 18:26:46.121270 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:26:46.121754 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:26:46.189115 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:26:46.194856 jq[1618]: true Jan 23 18:26:46.211848 tar[1598]: linux-amd64/LICENSE Jan 23 18:26:46.211848 tar[1598]: linux-amd64/helm Jan 23 18:26:46.239343 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 18:26:46.240015 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 18:26:46.247856 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:26:46.270317 dbus-daemon[1564]: [system] SELinux support is enabled Jan 23 18:26:46.247911 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:26:46.265753 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:26:46.266663 systemd-logind[1586]: New seat seat0. Jan 23 18:26:46.268891 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:26:46.278159 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
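The extend-filesystems/EXT4 messages above record an online resize of /dev/vda9 from 456704 to 1784827 blocks at a 4 KiB block size. A minimal sketch of that arithmetic (block counts taken from the log; the GiB figures are derived, not logged):

BLOCK_SIZE = 4096        # 4 KiB blocks, as reported by resize2fs/EXT4 above
OLD_BLOCKS = 456_704
NEW_BLOCKS = 1_784_827

for label, blocks in (("before", OLD_BLOCKS), ("after", NEW_BLOCKS)):
    print(f"/dev/vda9 {label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
# prints roughly 1.74 GiB before and 6.81 GiB after the online resize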
Jan 23 18:26:46.304552 update_engine[1588]: I20260123 18:26:46.299639 1588 update_check_scheduler.cc:74] Next update check in 4m29s Jan 23 18:26:46.303645 dbus-daemon[1564]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 18:26:46.301064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:26:46.301115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:26:46.312621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:26:46.312664 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:26:46.323969 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:26:46.341307 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:26:46.405510 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:26:46.409316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:26:46.421537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:26:46.491512 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:26:46.752956 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:26:46.803537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:26:46.821108 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:26:46.895724 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:26:46.896644 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:26:46.918581 containerd[1632]: time="2026-01-23T18:26:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:26:46.919706 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 23 18:26:46.927259 containerd[1632]: time="2026-01-23T18:26:46.927143090Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:26:46.963985 containerd[1632]: time="2026-01-23T18:26:46.963169612Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.761µs" Jan 23 18:26:46.967353 containerd[1632]: time="2026-01-23T18:26:46.965725430Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:26:46.967353 containerd[1632]: time="2026-01-23T18:26:46.965909435Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:26:46.967353 containerd[1632]: time="2026-01-23T18:26:46.965935297Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:26:46.968539 containerd[1632]: time="2026-01-23T18:26:46.968000998Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:26:46.968539 containerd[1632]: time="2026-01-23T18:26:46.968035220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:26:46.968539 containerd[1632]: time="2026-01-23T18:26:46.968132945Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:26:46.968539 containerd[1632]: time="2026-01-23T18:26:46.968150122Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.969561 containerd[1632]: time="2026-01-23T18:26:46.969288896Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.969561 containerd[1632]: time="2026-01-23T18:26:46.969329154Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:26:46.969561 containerd[1632]: time="2026-01-23T18:26:46.969347935Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:26:46.969561 containerd[1632]: time="2026-01-23T18:26:46.969362250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.971467 containerd[1632]: time="2026-01-23T18:26:46.971282591Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.971467 containerd[1632]: time="2026-01-23T18:26:46.971314175Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:26:46.972513 containerd[1632]: time="2026-01-23T18:26:46.972246135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.987698 containerd[1632]: time="2026-01-23T18:26:46.986145329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.987698 containerd[1632]: time="2026-01-23T18:26:46.987039567Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:26:46.987698 containerd[1632]: time="2026-01-23T18:26:46.987074955Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:26:46.992372 containerd[1632]: time="2026-01-23T18:26:46.989735469Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:26:46.994774 containerd[1632]: time="2026-01-23T18:26:46.994582155Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:26:46.994774 containerd[1632]: time="2026-01-23T18:26:46.994744246Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:26:46.995614 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:26:47.010981 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015345707Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015597478Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015792662Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015817815Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015838389Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015855093Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015877704Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015893466Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015911801Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015927998Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015952736Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015968661Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:26:47.017586 containerd[1632]: time="2026-01-23T18:26:47.015983977Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:26:47.017586 
containerd[1632]: time="2026-01-23T18:26:47.016013657Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:26:47.018038 containerd[1632]: time="2026-01-23T18:26:47.016220471Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:26:47.018038 containerd[1632]: time="2026-01-23T18:26:47.016253304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:26:47.018038 containerd[1632]: time="2026-01-23T18:26:47.016360195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:26:47.020535 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020797543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020830060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020845346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020864988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020883597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020912691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020956441Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.020982273Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.021166953Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.021243100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:26:47.021702 containerd[1632]: time="2026-01-23T18:26:47.021264859Z" level=info msg="Start snapshots syncer" Jan 23 18:26:47.022597 containerd[1632]: time="2026-01-23T18:26:47.022566704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:26:47.024543 containerd[1632]: time="2026-01-23T18:26:47.022999062Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:26:47.024543 containerd[1632]: time="2026-01-23T18:26:47.023078005Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.023160564Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029841460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029895331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029917080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029934058Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029954875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029970566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.029986318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.030002961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:26:47.032033 containerd[1632]: time="2026-01-23T18:26:47.030017671Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:26:47.027729 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:26:47.035626 containerd[1632]: time="2026-01-23T18:26:47.032672242Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:26:47.035626 containerd[1632]: time="2026-01-23T18:26:47.032810831Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.040106926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.040251450Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.040274435Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.040293004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041140286Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041155986Z" level=info msg="runtime interface created" Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041162652Z" level=info msg="created NRI interface" Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041175041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041194045Z" level=info msg="Connect containerd service" Jan 23 18:26:47.041297 containerd[1632]: time="2026-01-23T18:26:47.041243763Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:26:47.079827 containerd[1632]: time="2026-01-23T18:26:47.078597050Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:26:48.669151 tar[1598]: linux-amd64/README.md Jan 23 18:26:48.736681 containerd[1632]: time="2026-01-23T18:26:48.736621337Z" level=info msg="Start subscribing containerd event" Jan 23 18:26:48.737646 containerd[1632]: time="2026-01-23T18:26:48.737593007Z" level=info msg="Start recovering state" Jan 23 18:26:48.737970 containerd[1632]: time="2026-01-23T18:26:48.737941273Z" level=info msg="Start event monitor" Jan 23 18:26:48.738146 containerd[1632]: time="2026-01-23T18:26:48.738121369Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:26:48.738236 containerd[1632]: time="2026-01-23T18:26:48.738216571Z" level=info msg="Start streaming server" Jan 23 18:26:48.738671 containerd[1632]: time="2026-01-23T18:26:48.738639515Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:26:48.738827 containerd[1632]: 
time="2026-01-23T18:26:48.738804083Z" level=info msg="runtime interface starting up..." Jan 23 18:26:48.738907 containerd[1632]: time="2026-01-23T18:26:48.738889796Z" level=info msg="starting plugins..." Jan 23 18:26:48.738998 containerd[1632]: time="2026-01-23T18:26:48.738978959Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:26:48.740043 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:26:48.743966 containerd[1632]: time="2026-01-23T18:26:48.740238290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:26:48.743966 containerd[1632]: time="2026-01-23T18:26:48.740330356Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:26:48.743966 containerd[1632]: time="2026-01-23T18:26:48.743548875Z" level=info msg="containerd successfully booted in 1.835460s" Jan 23 18:26:48.826676 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:26:51.678709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:26:51.687338 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:26:51.695019 systemd[1]: Startup finished in 7.889s (kernel) + 14.378s (initrd) + 18.562s (userspace) = 40.830s. Jan 23 18:26:51.695283 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:26:53.545126 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:26:53.549103 systemd[1]: Started sshd@0-10.0.0.29:22-10.0.0.1:58074.service - OpenSSH per-connection server daemon (10.0.0.1:58074). Jan 23 18:26:54.108108 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 58074 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:54.117594 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:54.137855 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:26:54.140074 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:26:54.148361 systemd-logind[1586]: New session 1 of user core. Jan 23 18:26:54.150854 kubelet[1706]: E0123 18:26:54.150747 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:26:54.157843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:26:54.158144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:26:54.159056 systemd[1]: kubelet.service: Consumed 6.229s CPU time, 267.9M memory peak. Jan 23 18:26:54.211316 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:26:54.222150 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:26:54.278054 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:54.284304 systemd-logind[1586]: New session 2 of user core. Jan 23 18:26:54.760938 systemd[1725]: Queued start job for default target default.target. Jan 23 18:26:54.785173 systemd[1725]: Created slice app.slice - User Application Slice. 
Jan 23 18:26:54.785298 systemd[1725]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:26:54.785314 systemd[1725]: Reached target paths.target - Paths. Jan 23 18:26:54.785654 systemd[1725]: Reached target timers.target - Timers. Jan 23 18:26:54.788617 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:26:54.790159 systemd[1725]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:26:55.043940 systemd[1725]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:26:55.062244 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:26:55.062935 systemd[1725]: Reached target sockets.target - Sockets. Jan 23 18:26:55.063147 systemd[1725]: Reached target basic.target - Basic System. Jan 23 18:26:55.063304 systemd[1725]: Reached target default.target - Main User Target. Jan 23 18:26:55.063566 systemd[1725]: Startup finished in 744ms. Jan 23 18:26:55.063723 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:26:55.081125 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:26:55.135804 systemd[1]: Started sshd@1-10.0.0.29:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082). Jan 23 18:26:55.268709 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:55.272593 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:55.285022 systemd-logind[1586]: New session 3 of user core. Jan 23 18:26:55.299836 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:26:55.332233 sshd[1743]: Connection closed by 10.0.0.1 port 58082 Jan 23 18:26:55.333216 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 23 18:26:55.343821 systemd[1]: sshd@1-10.0.0.29:22-10.0.0.1:58082.service: Deactivated successfully. Jan 23 18:26:55.346303 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:26:55.348016 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:26:55.352654 systemd[1]: Started sshd@2-10.0.0.29:22-10.0.0.1:58088.service - OpenSSH per-connection server daemon (10.0.0.1:58088). Jan 23 18:26:55.355215 systemd-logind[1586]: Removed session 3. Jan 23 18:26:55.443604 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 58088 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:55.446349 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:55.456360 systemd-logind[1586]: New session 4 of user core. Jan 23 18:26:55.467737 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:26:55.484614 sshd[1753]: Connection closed by 10.0.0.1 port 58088 Jan 23 18:26:55.485821 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 23 18:26:55.496364 systemd[1]: sshd@2-10.0.0.29:22-10.0.0.1:58088.service: Deactivated successfully. Jan 23 18:26:55.498939 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:26:55.500715 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:26:55.504115 systemd[1]: Started sshd@3-10.0.0.29:22-10.0.0.1:58098.service - OpenSSH per-connection server daemon (10.0.0.1:58098). Jan 23 18:26:55.505173 systemd-logind[1586]: Removed session 4. 
Jan 23 18:26:55.586081 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:55.588568 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:55.627766 systemd-logind[1586]: New session 5 of user core. Jan 23 18:26:55.637820 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:26:55.669705 sshd[1764]: Connection closed by 10.0.0.1 port 58098 Jan 23 18:26:55.670863 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 23 18:26:55.683112 systemd[1]: sshd@3-10.0.0.29:22-10.0.0.1:58098.service: Deactivated successfully. Jan 23 18:26:55.686079 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:26:55.688012 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:26:55.694279 systemd[1]: Started sshd@4-10.0.0.29:22-10.0.0.1:58114.service - OpenSSH per-connection server daemon (10.0.0.1:58114). Jan 23 18:26:55.695301 systemd-logind[1586]: Removed session 5. Jan 23 18:26:55.778201 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 58114 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:55.780769 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:55.788890 systemd-logind[1586]: New session 6 of user core. Jan 23 18:26:55.798755 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:26:55.836936 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:26:55.837612 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:26:55.854599 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 23 18:26:55.856998 sshd[1775]: Connection closed by 10.0.0.1 port 58114 Jan 23 18:26:55.857532 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 23 18:26:55.869905 systemd[1]: sshd@4-10.0.0.29:22-10.0.0.1:58114.service: Deactivated successfully. Jan 23 18:26:55.874058 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:26:55.875962 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:26:55.881379 systemd[1]: Started sshd@5-10.0.0.29:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128). Jan 23 18:26:55.882316 systemd-logind[1586]: Removed session 6. Jan 23 18:26:55.969732 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:55.971884 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:55.981356 systemd-logind[1586]: New session 7 of user core. Jan 23 18:26:55.991707 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 23 18:26:56.020783 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:26:56.021363 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:26:56.028179 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 23 18:26:56.047627 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:26:56.048521 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:26:56.063224 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:26:56.183000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:26:56.185747 augenrules[1813]: No rules Jan 23 18:26:56.188272 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 23 18:26:56.188335 kernel: audit: type=1305 audit(1769192816.183:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:26:56.190657 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:26:56.191179 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:26:56.192883 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 23 18:26:56.183000 audit[1813]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd7f55770 a2=420 a3=0 items=0 ppid=1794 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:26:56.196801 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Jan 23 18:26:56.197115 sshd[1787]: Connection closed by 10.0.0.1 port 58128 Jan 23 18:26:56.213074 kernel: audit: type=1300 audit(1769192816.183:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd7f55770 a2=420 a3=0 items=0 ppid=1794 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:26:56.183000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:26:56.220202 kernel: audit: type=1327 audit(1769192816.183:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:26:56.220264 kernel: audit: type=1130 audit(1769192816.187:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.232062 kernel: audit: type=1131 audit(1769192816.187:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:56.243539 kernel: audit: type=1106 audit(1769192816.187:230): pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.187000 audit[1788]: USER_END pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.249162 systemd[1]: sshd@5-10.0.0.29:22-10.0.0.1:58128.service: Deactivated successfully. Jan 23 18:26:56.251750 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:26:56.253592 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:26:56.257186 systemd[1]: Started sshd@6-10.0.0.29:22-10.0.0.1:58134.service - OpenSSH per-connection server daemon (10.0.0.1:58134). Jan 23 18:26:56.258073 systemd-logind[1586]: Removed session 7. Jan 23 18:26:56.187000 audit[1788]: CRED_DISP pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.277607 kernel: audit: type=1104 audit(1769192816.187:231): pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.277704 kernel: audit: type=1106 audit(1769192816.195:232): pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.195000 audit[1783]: USER_END pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.302787 kernel: audit: type=1104 audit(1769192816.195:233): pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.195000 audit[1783]: CRED_DISP pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.29:22-10.0.0.1:58128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.333593 kernel: audit: type=1131 audit(1769192816.247:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.29:22-10.0.0.1:58128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:26:56.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.29:22-10.0.0.1:58134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.402000 audit[1822]: USER_ACCT pid=1822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.478856 sshd[1822]: Accepted publickey for core from 10.0.0.1 port 58134 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:26:56.478000 audit[1822]: CRED_ACQ pid=1822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.478000 audit[1822]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe092c4e0 a2=3 a3=0 items=0 ppid=1 pid=1822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:26:56.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:26:56.507016 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:26:56.602734 systemd-logind[1586]: New session 8 of user core. Jan 23 18:26:56.610748 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:26:56.614000 audit[1822]: USER_START pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.619000 audit[1826]: CRED_ACQ pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:26:56.647000 audit[1827]: USER_ACCT pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.648549 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:26:56.648000 audit[1827]: CRED_REFR pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:26:56.649320 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:26:56.649000 audit[1827]: USER_START pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:27:00.327235 systemd[1]: Starting docker.service - Docker Application Container Engine... 
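Note: the audit records that follow are easier to read once the PROCTITLE hex is decoded; it is the recorded process command line with NUL-separated arguments. A small reading aid, assuming nothing beyond the Python standard library:

    # Decode an audit PROCTITLE hex string (NUL-separated argv) into a readable command line.
    def decode_proctitle(hexstr: str) -> str:
        raw = bytes.fromhex(hexstr)
        return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER

Decoded this way, the NETFILTER_CFG series below is dockerd creating its usual DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains in both iptables and ip6tables, wiring them into PREROUTING, OUTPUT and FORWARD, and adding the 172.17.0.0/16 MASQUERADE rule for docker0.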
Jan 23 18:27:00.372174 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:27:02.136818 dockerd[1849]: time="2026-01-23T18:27:02.136305892Z" level=info msg="Starting up" Jan 23 18:27:02.138782 dockerd[1849]: time="2026-01-23T18:27:02.138655579Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:27:02.206706 dockerd[1849]: time="2026-01-23T18:27:02.206580894Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:27:03.013222 systemd[1]: var-lib-docker-metacopy\x2dcheck531289163-merged.mount: Deactivated successfully. Jan 23 18:27:03.095233 dockerd[1849]: time="2026-01-23T18:27:03.094693791Z" level=info msg="Loading containers: start." Jan 23 18:27:03.129631 kernel: Initializing XFRM netlink socket Jan 23 18:27:03.350000 audit[1903]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.355525 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 23 18:27:03.355692 kernel: audit: type=1325 audit(1769192823.350:244): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.350000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb613a030 a2=0 a3=0 items=0 ppid=1849 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.381275 kernel: audit: type=1300 audit(1769192823.350:244): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb613a030 a2=0 a3=0 items=0 ppid=1849 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.381492 kernel: audit: type=1327 audit(1769192823.350:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:27:03.350000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:27:03.360000 audit[1905]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.402374 kernel: audit: type=1325 audit(1769192823.360:245): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.360000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc7f9a1ef0 a2=0 a3=0 items=0 ppid=1849 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.420337 kernel: audit: type=1300 audit(1769192823.360:245): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc7f9a1ef0 a2=0 a3=0 items=0 ppid=1849 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.420755 kernel: audit: type=1327 audit(1769192823.360:245): 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:27:03.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:27:03.428782 kernel: audit: type=1325 audit(1769192823.367:246): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.367000 audit[1907]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.439674 kernel: audit: type=1300 audit(1769192823.367:246): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffa857930 a2=0 a3=0 items=0 ppid=1849 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.367000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffa857930 a2=0 a3=0 items=0 ppid=1849 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:27:03.466375 kernel: audit: type=1327 audit(1769192823.367:246): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:27:03.466551 kernel: audit: type=1325 audit(1769192823.375:247): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.375000 audit[1909]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.375000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe14bef000 a2=0 a3=0 items=0 ppid=1849 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:27:03.383000 audit[1911]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.383000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbce1ec70 a2=0 a3=0 items=0 ppid=1849 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:27:03.392000 audit[1913]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.392000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffefe9fc900 a2=0 a3=0 items=0 ppid=1849 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:27:03.400000 audit[1915]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.400000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe83ce2c40 a2=0 a3=0 items=0 ppid=1849 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:27:03.409000 audit[1917]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.409000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff71aa4a20 a2=0 a3=0 items=0 ppid=1849 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:27:03.542000 audit[1920]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.542000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe7eb54eb0 a2=0 a3=0 items=0 ppid=1849 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.542000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 18:27:03.551000 audit[1922]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.551000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc19311d20 a2=0 a3=0 items=0 ppid=1849 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:27:03.558000 audit[1924]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.558000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffc7827960 a2=0 a3=0 items=0 ppid=1849 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:27:03.566000 audit[1926]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.566000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffebd222410 a2=0 a3=0 items=0 ppid=1849 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:27:03.575000 audit[1928]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:03.575000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd56232e70 a2=0 a3=0 items=0 ppid=1849 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:27:03.935000 audit[1958]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.935000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff3e415070 a2=0 a3=0 items=0 ppid=1849 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:27:03.942000 audit[1960]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.942000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd2fd9ddb0 a2=0 a3=0 items=0 ppid=1849 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:27:03.948000 audit[1962]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.948000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffff0628c0 a2=0 a3=0 items=0 ppid=1849 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.948000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:27:03.954000 audit[1964]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.954000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd34f9f770 a2=0 a3=0 items=0 ppid=1849 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:27:03.962000 audit[1966]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.962000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc521c5660 a2=0 a3=0 items=0 ppid=1849 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:27:03.968000 audit[1968]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.968000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1c2809a0 a2=0 a3=0 items=0 ppid=1849 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:27:03.979000 audit[1970]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.979000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4c86b8f0 a2=0 a3=0 items=0 ppid=1849 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:27:03.988000 audit[1972]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.988000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdb9a5b5b0 a2=0 a3=0 items=0 ppid=1849 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 
18:27:03.999000 audit[1974]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:03.999000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffefcba71b0 a2=0 a3=0 items=0 ppid=1849 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:03.999000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 18:27:04.007000 audit[1976]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.007000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf6716090 a2=0 a3=0 items=0 ppid=1849 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:27:04.014000 audit[1978]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.014000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffede5492e0 a2=0 a3=0 items=0 ppid=1849 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:27:04.022000 audit[1980]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.022000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffa36094b0 a2=0 a3=0 items=0 ppid=1849 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:27:04.028000 audit[1982]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.028000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc21aa1880 a2=0 a3=0 items=0 ppid=1849 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:27:04.055000 audit[1987]: NETFILTER_CFG table=filter:28 family=2 
entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.055000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2bb83df0 a2=0 a3=0 items=0 ppid=1849 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:27:04.062000 audit[1989]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.062000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc30617420 a2=0 a3=0 items=0 ppid=1849 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:27:04.070000 audit[1991]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.070000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd5cc53d60 a2=0 a3=0 items=0 ppid=1849 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:27:04.077000 audit[1993]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.077000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda8c28ee0 a2=0 a3=0 items=0 ppid=1849 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:27:04.086000 audit[1995]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.086000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffff6178d10 a2=0 a3=0 items=0 ppid=1849 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.086000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:27:04.092000 audit[1997]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:04.092000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff276454e0 a2=0 a3=0 items=0 ppid=1849 pid=1997 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.092000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:27:04.155000 audit[2002]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.155000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffda0e2c630 a2=0 a3=0 items=0 ppid=1849 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 18:27:04.162000 audit[2004]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.162000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff5c564e50 a2=0 a3=0 items=0 ppid=1849 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 18:27:04.167979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:27:04.170856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:27:04.192000 audit[2013]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.192000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd0ce3a8b0 a2=0 a3=0 items=0 ppid=1849 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 18:27:04.221000 audit[2021]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.221000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe9259cd20 a2=0 a3=0 items=0 ppid=1849 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 18:27:04.231000 audit[2023]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.231000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff118ca710 a2=0 a3=0 items=0 ppid=1849 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 18:27:04.245000 audit[2025]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.245000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc8df76d80 a2=0 a3=0 items=0 ppid=1849 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 18:27:04.252000 audit[2027]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.252000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc19e7af00 a2=0 a3=0 items=0 ppid=1849 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.252000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:27:04.263000 audit[2029]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:04.263000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe0a2c6620 a2=0 a3=0 items=0 ppid=1849 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:04.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 18:27:04.266618 systemd-networkd[1517]: docker0: Link UP Jan 23 18:27:04.287874 dockerd[1849]: time="2026-01-23T18:27:04.287606873Z" level=info msg="Loading containers: done." Jan 23 18:27:04.359523 dockerd[1849]: time="2026-01-23T18:27:04.359173472Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:27:04.359915 dockerd[1849]: time="2026-01-23T18:27:04.359584450Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:27:04.360525 dockerd[1849]: time="2026-01-23T18:27:04.360128046Z" level=info msg="Initializing buildkit" Jan 23 18:27:04.434078 dockerd[1849]: time="2026-01-23T18:27:04.433869050Z" level=info msg="Completed buildkit initialization" Jan 23 18:27:04.453849 dockerd[1849]: time="2026-01-23T18:27:04.453608231Z" level=info msg="Daemon has completed initialization" Jan 23 18:27:04.454143 dockerd[1849]: time="2026-01-23T18:27:04.454021001Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:27:04.454345 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:27:04.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:05.576109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:05.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:05.615998 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:27:05.853746 kubelet[2075]: E0123 18:27:05.853269 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:27:05.864762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:27:05.865184 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
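Note: the kubelet exits above (18:26:54 and 18:27:05, with further restarts scheduled) all fail the same way: /var/lib/kubelet/config.yaml does not exist yet. That file, together with /etc/kubernetes/kubelet.conf, is normally generated by kubeadm init or kubeadm join, so the unit is expected to crash-loop on roughly a ten-second cadence until node bootstrap runs. A diagnostic sketch under that assumption; the paths are the kubeadm defaults:

    # Check for the files kubeadm normally generates; their absence explains the
    # "failed to load Kubelet config file /var/lib/kubelet/config.yaml" exits above.
    import os

    expected = [
        "/var/lib/kubelet/config.yaml",   # KubeletConfiguration written by kubeadm
        "/etc/kubernetes/kubelet.conf",   # kubeconfig written by kubeadm
    ]
    for path in expected:
        state = "present" if os.path.exists(path) else "missing (kubeadm has not run yet)"
        print(f"{path}: {state}")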
Jan 23 18:27:05.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:05.867725 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 110.5M memory peak. Jan 23 18:27:08.320114 containerd[1632]: time="2026-01-23T18:27:08.317640033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 18:27:09.590239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331811598.mount: Deactivated successfully. Jan 23 18:27:15.999917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:27:16.009368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:16.710474 containerd[1632]: time="2026-01-23T18:27:16.710197351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:16.712529 containerd[1632]: time="2026-01-23T18:27:16.712312904Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30106657" Jan 23 18:27:16.714028 containerd[1632]: time="2026-01-23T18:27:16.713923996Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:16.717697 containerd[1632]: time="2026-01-23T18:27:16.717360902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:16.719210 containerd[1632]: time="2026-01-23T18:27:16.719061387Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 8.401288772s" Jan 23 18:27:16.719344 containerd[1632]: time="2026-01-23T18:27:16.719223613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 18:27:16.722769 containerd[1632]: time="2026-01-23T18:27:16.722622965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:27:16.859925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:16.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:16.864141 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 23 18:27:16.864237 kernel: audit: type=1130 audit(1769192836.858:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:27:16.885843 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:27:17.305592 kubelet[2153]: E0123 18:27:17.304940 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:27:17.310716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:27:17.310987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:27:17.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:17.312171 systemd[1]: kubelet.service: Consumed 1.094s CPU time, 110.6M memory peak. Jan 23 18:27:17.323613 kernel: audit: type=1131 audit(1769192837.311:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:20.381169 containerd[1632]: time="2026-01-23T18:27:20.380140068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:20.385208 containerd[1632]: time="2026-01-23T18:27:20.382517119Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 23 18:27:20.385208 containerd[1632]: time="2026-01-23T18:27:20.384929663Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:20.391710 containerd[1632]: time="2026-01-23T18:27:20.391557322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:20.393363 containerd[1632]: time="2026-01-23T18:27:20.393226940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.670300546s" Jan 23 18:27:20.393363 containerd[1632]: time="2026-01-23T18:27:20.393319809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 18:27:20.397347 containerd[1632]: time="2026-01-23T18:27:20.397000975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:27:23.334477 containerd[1632]: time="2026-01-23T18:27:23.333015720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:23.337743 containerd[1632]: time="2026-01-23T18:27:23.335117195Z" level=info msg="stop pulling image 
registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 23 18:27:23.337743 containerd[1632]: time="2026-01-23T18:27:23.336895727Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:23.342657 containerd[1632]: time="2026-01-23T18:27:23.342262950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:23.345473 containerd[1632]: time="2026-01-23T18:27:23.345255726Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.948208105s" Jan 23 18:27:23.345473 containerd[1632]: time="2026-01-23T18:27:23.345369355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 18:27:23.348736 containerd[1632]: time="2026-01-23T18:27:23.348614720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:27:27.122623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333061993.mount: Deactivated successfully. Jan 23 18:27:27.422871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:27:27.427851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:27.795701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:27.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:27.804584 kernel: audit: type=1130 audit(1769192847.795:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:27.817192 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:27:28.299434 kubelet[2185]: E0123 18:27:28.299100 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:27:28.305964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:27:28.306318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:27:28.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:28.307873 systemd[1]: kubelet.service: Consumed 862ms CPU time, 108.8M memory peak. 
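The kubelet failures above (restart counters 2 and 3) are the usual first-boot pattern on a kubeadm-managed node: the unit references KUBELET_KUBEADM_ARGS, and /var/lib/kubelet/config.yaml is only written once kubeadm init or kubeadm join runs, so each start exits immediately and systemd schedules another attempt. A hypothetical pre-flight check that mirrors the error text:

    from pathlib import Path

    # Sketch only: the kubelet exits when the file passed to --config is missing;
    # on this node it will be written later by kubeadm.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        raise SystemExit(f"failed to load kubelet config file, path: {cfg}: no such file or directory")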
Jan 23 18:27:28.316437 kernel: audit: type=1131 audit(1769192848.306:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:28.752277 containerd[1632]: time="2026-01-23T18:27:28.751245596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:28.752277 containerd[1632]: time="2026-01-23T18:27:28.752139377Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 23 18:27:28.755901 containerd[1632]: time="2026-01-23T18:27:28.754162055Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:28.757582 containerd[1632]: time="2026-01-23T18:27:28.757515250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:28.758734 containerd[1632]: time="2026-01-23T18:27:28.758668675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 5.410025527s" Jan 23 18:27:28.758734 containerd[1632]: time="2026-01-23T18:27:28.758731098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:27:28.761944 containerd[1632]: time="2026-01-23T18:27:28.761897016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:27:29.539955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949528971.mount: Deactivated successfully. Jan 23 18:27:31.869969 update_engine[1588]: I20260123 18:27:31.867463 1588 update_attempter.cc:509] Updating boot flags... 
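The containerd fields above are enough for a rough throughput estimate; for kube-proxy:v1.33.7, roughly 32 MB of compressed content arrived in about 5.4 s:

    # Effective pull rate for kube-proxy:v1.33.7, using the numbers logged above.
    bytes_read = 31_926_374        # "bytes read=31926374"
    duration_s = 5.410025527       # "in 5.410025527s"
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")   # ~5.9 MB/s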
Jan 23 18:27:32.426354 containerd[1632]: time="2026-01-23T18:27:32.425517004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:32.428580 containerd[1632]: time="2026-01-23T18:27:32.428555344Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20581182" Jan 23 18:27:32.430144 containerd[1632]: time="2026-01-23T18:27:32.430118075Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:32.435642 containerd[1632]: time="2026-01-23T18:27:32.435289329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:32.439023 containerd[1632]: time="2026-01-23T18:27:32.438952979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.677028348s" Jan 23 18:27:32.439103 containerd[1632]: time="2026-01-23T18:27:32.439024769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:27:32.457928 containerd[1632]: time="2026-01-23T18:27:32.457898590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:27:32.987071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467092975.mount: Deactivated successfully. 
Jan 23 18:27:33.001967 containerd[1632]: time="2026-01-23T18:27:33.001826592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:27:33.003282 containerd[1632]: time="2026-01-23T18:27:33.003162030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:27:33.004712 containerd[1632]: time="2026-01-23T18:27:33.004567435Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:27:33.007212 containerd[1632]: time="2026-01-23T18:27:33.007109452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:27:33.007881 containerd[1632]: time="2026-01-23T18:27:33.007787713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 549.75553ms" Jan 23 18:27:33.007881 containerd[1632]: time="2026-01-23T18:27:33.007844012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:27:33.010597 containerd[1632]: time="2026-01-23T18:27:33.010537612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:27:33.668244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576535382.mount: Deactivated successfully. 
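The pause:3.10 image above carries the io.cri-containerd.pinned label, which containerd and the kubelet treat as exempt from image garbage collection. A small sketch that extracts the labels from one of these ImageCreate events (the sample string is copied from the log):

    import re

    event = ('ImageCreate event name:"registry.k8s.io/pause:3.10" '
             'labels:{key:"io.cri-containerd.image" value:"managed"} '
             'labels:{key:"io.cri-containerd.pinned" value:"pinned"}')
    labels = dict(re.findall(r'labels:\{key:"([^"]+)" value:"([^"]+)"\}', event))
    print(labels)
    # {'io.cri-containerd.image': 'managed', 'io.cri-containerd.pinned': 'pinned'}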
Jan 23 18:27:38.273906 containerd[1632]: time="2026-01-23T18:27:38.272797336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:38.276872 containerd[1632]: time="2026-01-23T18:27:38.274727410Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 23 18:27:38.276872 containerd[1632]: time="2026-01-23T18:27:38.276073438Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:38.279975 containerd[1632]: time="2026-01-23T18:27:38.279901630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:27:38.281514 containerd[1632]: time="2026-01-23T18:27:38.281349366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.270774862s" Jan 23 18:27:38.281514 containerd[1632]: time="2026-01-23T18:27:38.281463918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:27:38.513153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:27:38.671162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:39.280130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:39.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:39.291571 kernel: audit: type=1130 audit(1769192859.279:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:39.416691 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:27:40.083186 kubelet[2335]: E0123 18:27:40.082777 2335 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:27:40.180237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:27:40.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:40.180708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:27:40.181641 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 110.5M memory peak. 
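By this point the kubelet restart counter is at 4. The "Scheduled restart job" events are spaced roughly 11 s apart, consistent with a RestartSec of about 10 s in the unit's drop-in (an assumption, since the unit file itself is not shown) plus the second or so each attempt runs before exiting:

    from datetime import datetime

    # Gaps between the "Scheduled restart job" events above (counters 2, 3 and 4).
    stamps = ["18:27:15.999917", "18:27:27.422871", "18:27:38.513153"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])])
    # [11.4, 11.1]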
Jan 23 18:27:40.190484 kernel: audit: type=1131 audit(1769192860.180:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:42.611109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:42.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:42.611512 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 110.5M memory peak. Jan 23 18:27:42.615181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:42.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:42.628174 kernel: audit: type=1130 audit(1769192862.609:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:42.628367 kernel: audit: type=1131 audit(1769192862.610:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:42.652930 systemd[1]: Reload requested from client PID 2369 ('systemctl') (unit session-8.scope)... Jan 23 18:27:42.653046 systemd[1]: Reloading... Jan 23 18:27:42.771475 zram_generator::config[2414]: No configuration found. Jan 23 18:27:43.208056 systemd[1]: Reloading finished in 549 ms. 
Jan 23 18:27:43.244000 audit: BPF prog-id=63 op=LOAD Jan 23 18:27:43.251909 kernel: audit: type=1334 audit(1769192863.244:295): prog-id=63 op=LOAD Jan 23 18:27:43.251995 kernel: audit: type=1334 audit(1769192863.244:296): prog-id=43 op=UNLOAD Jan 23 18:27:43.244000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:27:43.245000 audit: BPF prog-id=64 op=LOAD Jan 23 18:27:43.254688 kernel: audit: type=1334 audit(1769192863.245:297): prog-id=64 op=LOAD Jan 23 18:27:43.254732 kernel: audit: type=1334 audit(1769192863.245:298): prog-id=65 op=LOAD Jan 23 18:27:43.245000 audit: BPF prog-id=65 op=LOAD Jan 23 18:27:43.245000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:27:43.259698 kernel: audit: type=1334 audit(1769192863.245:299): prog-id=44 op=UNLOAD Jan 23 18:27:43.259742 kernel: audit: type=1334 audit(1769192863.245:300): prog-id=45 op=UNLOAD Jan 23 18:27:43.245000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:27:43.246000 audit: BPF prog-id=66 op=LOAD Jan 23 18:27:43.246000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:27:43.246000 audit: BPF prog-id=67 op=LOAD Jan 23 18:27:43.246000 audit: BPF prog-id=68 op=LOAD Jan 23 18:27:43.246000 audit: BPF prog-id=47 op=UNLOAD Jan 23 18:27:43.246000 audit: BPF prog-id=48 op=UNLOAD Jan 23 18:27:43.248000 audit: BPF prog-id=69 op=LOAD Jan 23 18:27:43.248000 audit: BPF prog-id=70 op=LOAD Jan 23 18:27:43.248000 audit: BPF prog-id=56 op=UNLOAD Jan 23 18:27:43.248000 audit: BPF prog-id=57 op=UNLOAD Jan 23 18:27:43.250000 audit: BPF prog-id=71 op=LOAD Jan 23 18:27:43.250000 audit: BPF prog-id=58 op=UNLOAD Jan 23 18:27:43.252000 audit: BPF prog-id=72 op=LOAD Jan 23 18:27:43.264000 audit: BPF prog-id=60 op=UNLOAD Jan 23 18:27:43.264000 audit: BPF prog-id=73 op=LOAD Jan 23 18:27:43.264000 audit: BPF prog-id=74 op=LOAD Jan 23 18:27:43.264000 audit: BPF prog-id=61 op=UNLOAD Jan 23 18:27:43.264000 audit: BPF prog-id=62 op=UNLOAD Jan 23 18:27:43.266000 audit: BPF prog-id=75 op=LOAD Jan 23 18:27:43.266000 audit: BPF prog-id=52 op=UNLOAD Jan 23 18:27:43.268000 audit: BPF prog-id=76 op=LOAD Jan 23 18:27:43.268000 audit: BPF prog-id=49 op=UNLOAD Jan 23 18:27:43.269000 audit: BPF prog-id=77 op=LOAD Jan 23 18:27:43.269000 audit: BPF prog-id=78 op=LOAD Jan 23 18:27:43.269000 audit: BPF prog-id=50 op=UNLOAD Jan 23 18:27:43.269000 audit: BPF prog-id=51 op=UNLOAD Jan 23 18:27:43.270000 audit: BPF prog-id=79 op=LOAD Jan 23 18:27:43.270000 audit: BPF prog-id=59 op=UNLOAD Jan 23 18:27:43.272000 audit: BPF prog-id=80 op=LOAD Jan 23 18:27:43.272000 audit: BPF prog-id=53 op=UNLOAD Jan 23 18:27:43.272000 audit: BPF prog-id=81 op=LOAD Jan 23 18:27:43.272000 audit: BPF prog-id=82 op=LOAD Jan 23 18:27:43.272000 audit: BPF prog-id=54 op=UNLOAD Jan 23 18:27:43.272000 audit: BPF prog-id=55 op=UNLOAD Jan 23 18:27:43.380592 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:27:43.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:27:43.380751 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:27:43.381324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:43.381449 systemd[1]: kubelet.service: Consumed 194ms CPU time, 98.4M memory peak. Jan 23 18:27:43.384131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:43.783280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
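The burst of BPF prog-id LOAD/UNLOAD audit records above accompanies the daemon reload: systemd re-attaches its per-unit cgroup BPF programs, loading replacements and unloading the old ones. A quick way to tally such a burst from an exported journal (journal.txt is a hypothetical export of these lines):

    import re
    from collections import Counter

    # Count LOAD vs UNLOAD operations in an exported journal snippet.
    text = open("journal.txt").read()
    print(Counter(re.findall(r"prog-id=\d+ op=(LOAD|UNLOAD)", text)))
    # The reload burst above tallies to Counter({'LOAD': 20, 'UNLOAD': 20})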
Jan 23 18:27:43.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:43.797893 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:27:43.925280 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:27:43.925280 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:27:43.925280 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:27:43.925855 kubelet[2462]: I0123 18:27:43.925324 2462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:27:44.235628 kubelet[2462]: I0123 18:27:44.235355 2462 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:27:44.235628 kubelet[2462]: I0123 18:27:44.235496 2462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:27:44.239247 kubelet[2462]: I0123 18:27:44.238811 2462 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:27:44.268276 kubelet[2462]: I0123 18:27:44.268217 2462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:27:44.268752 kubelet[2462]: E0123 18:27:44.268668 2462 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:27:44.284806 kubelet[2462]: I0123 18:27:44.284723 2462 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:27:44.295314 kubelet[2462]: I0123 18:27:44.295169 2462 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:27:44.295960 kubelet[2462]: I0123 18:27:44.295846 2462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:27:44.296564 kubelet[2462]: I0123 18:27:44.295901 2462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:27:44.297275 kubelet[2462]: I0123 18:27:44.296582 2462 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:27:44.297275 kubelet[2462]: I0123 18:27:44.296611 2462 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:27:44.297275 kubelet[2462]: I0123 18:27:44.297015 2462 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:27:44.300769 kubelet[2462]: I0123 18:27:44.300672 2462 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:27:44.300769 kubelet[2462]: I0123 18:27:44.300711 2462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:27:44.300846 kubelet[2462]: I0123 18:27:44.300809 2462 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:27:44.303524 kubelet[2462]: I0123 18:27:44.302226 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:27:44.309040 kubelet[2462]: I0123 18:27:44.308908 2462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:27:44.313434 kubelet[2462]: E0123 18:27:44.312606 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:27:44.313434 kubelet[2462]: I0123 18:27:44.312717 2462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 
18:27:44.313434 kubelet[2462]: E0123 18:27:44.312825 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:27:44.314752 kubelet[2462]: W0123 18:27:44.314712 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:27:44.321287 kubelet[2462]: I0123 18:27:44.321221 2462 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:27:44.321480 kubelet[2462]: I0123 18:27:44.321360 2462 server.go:1289] "Started kubelet" Jan 23 18:27:44.323591 kubelet[2462]: I0123 18:27:44.323191 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:27:44.325252 kubelet[2462]: I0123 18:27:44.324850 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:27:44.325252 kubelet[2462]: I0123 18:27:44.325108 2462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:27:44.328339 kubelet[2462]: E0123 18:27:44.328306 2462 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:27:44.328626 kubelet[2462]: I0123 18:27:44.328577 2462 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:27:44.329039 kubelet[2462]: I0123 18:27:44.328989 2462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:27:44.329319 kubelet[2462]: E0123 18:27:44.327595 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d6f8959067b55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:27:44.321272661 +0000 UTC m=+0.493040415,LastTimestamp:2026-01-23 18:27:44.321272661 +0000 UTC m=+0.493040415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:27:44.330001 kubelet[2462]: E0123 18:27:44.329944 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:27:44.332234 kubelet[2462]: I0123 18:27:44.332176 2462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:27:44.332709 kubelet[2462]: E0123 18:27:44.332667 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="200ms" Jan 23 18:27:44.332831 kubelet[2462]: I0123 18:27:44.324846 2462 server.go:180] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 23 18:27:44.333057 kubelet[2462]: I0123 18:27:44.332992 2462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:27:44.333812 kubelet[2462]: E0123 18:27:44.333745 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:27:44.334361 kubelet[2462]: I0123 18:27:44.334263 2462 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:27:44.335916 kubelet[2462]: I0123 18:27:44.335784 2462 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:27:44.336500 kubelet[2462]: I0123 18:27:44.336356 2462 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:27:44.336500 kubelet[2462]: I0123 18:27:44.336492 2462 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:27:44.341000 audit[2479]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.344441 kubelet[2462]: I0123 18:27:44.343655 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:27:44.345120 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 23 18:27:44.345169 kernel: audit: type=1325 audit(1769192864.341:337): table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.341000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0aab5d00 a2=0 a3=0 items=0 ppid=2462 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.360633 kubelet[2462]: I0123 18:27:44.357844 2462 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:27:44.360633 kubelet[2462]: I0123 18:27:44.357857 2462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:27:44.360633 kubelet[2462]: I0123 18:27:44.357874 2462 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:27:44.361227 kernel: audit: type=1300 audit(1769192864.341:337): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0aab5d00 a2=0 a3=0 items=0 ppid=2462 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.361287 kernel: audit: type=1327 audit(1769192864.341:337): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:27:44.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:27:44.345000 audit[2481]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.371978 kernel: audit: type=1325 audit(1769192864.345:338): table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.372027 kernel: audit: type=1300 audit(1769192864.345:338): arch=c000003e 
syscall=46 success=yes exit=136 a0=3 a1=7ffd1f0aaff0 a2=0 a3=0 items=0 ppid=2462 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.345000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1f0aaff0 a2=0 a3=0 items=0 ppid=2462 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:27:44.388292 kernel: audit: type=1327 audit(1769192864.345:338): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:27:44.388337 kernel: audit: type=1325 audit(1769192864.348:339): table=mangle:44 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.348000 audit[2482]: NETFILTER_CFG table=mangle:44 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.394014 kernel: audit: type=1300 audit(1769192864.348:339): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe49928700 a2=0 a3=0 items=0 ppid=2462 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.348000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe49928700 a2=0 a3=0 items=0 ppid=2462 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:27:44.410140 kernel: audit: type=1327 audit(1769192864.348:339): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:27:44.410174 kernel: audit: type=1325 audit(1769192864.349:340): table=filter:45 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.349000 audit[2484]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.349000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5fc29760 a2=0 a3=0 items=0 ppid=2462 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:27:44.351000 audit[2486]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.351000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd91bf7cb0 
a2=0 a3=0 items=0 ppid=2462 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:27:44.354000 audit[2489]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:27:44.354000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe03708410 a2=0 a3=0 items=0 ppid=2462 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:27:44.356000 audit[2490]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.356000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffefbbd8e00 a2=0 a3=0 items=0 ppid=2462 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:27:44.362000 audit[2492]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.362000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc230d0fd0 a2=0 a3=0 items=0 ppid=2462 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:27:44.419028 kubelet[2462]: I0123 18:27:44.418901 2462 policy_none.go:49] "None policy: Start" Jan 23 18:27:44.419028 kubelet[2462]: I0123 18:27:44.418987 2462 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:27:44.419157 kubelet[2462]: I0123 18:27:44.419048 2462 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:27:44.426000 audit[2496]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.426000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc57be29d0 a2=0 a3=0 items=0 ppid=2462 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.426000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 18:27:44.428377 kubelet[2462]: I0123 18:27:44.428050 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:27:44.428377 kubelet[2462]: I0123 18:27:44.428169 2462 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:27:44.428377 kubelet[2462]: I0123 18:27:44.428230 2462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:27:44.428377 kubelet[2462]: I0123 18:27:44.428257 2462 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:27:44.428377 kubelet[2462]: E0123 18:27:44.428361 2462 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:27:44.430169 kubelet[2462]: E0123 18:27:44.430044 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:27:44.430248 kubelet[2462]: E0123 18:27:44.430221 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:27:44.430000 audit[2497]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.430000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6683d6b0 a2=0 a3=0 items=0 ppid=2462 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:27:44.432737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
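The HardEvictionThresholds inside the container-manager NodeConfig logged above match the documented kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. Worked out for a hypothetical node with a 50 GiB node/image filesystem:

    # Hypothetical 50 GiB node/image filesystem; memory.available is an absolute 100Mi.
    GiB = 1024 ** 3
    fs_bytes = 50 * GiB

    thresholds = {
        "memory.available": 100 * 1024 ** 2,        # 100Mi
        "nodefs.available": int(fs_bytes * 0.10),   # 10 %
        "nodefs.inodesFree": "5 % of inodes",
        "imagefs.available": int(fs_bytes * 0.15),  # 15 %
        "imagefs.inodesFree": "5 % of inodes",
    }
    for signal, value in thresholds.items():
        print(f"evict when {signal} < {value}")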
Jan 23 18:27:44.434000 audit[2499]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.434000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd747de9f0 a2=0 a3=0 items=0 ppid=2462 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:27:44.436000 audit[2500]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:27:44.436000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb2b72070 a2=0 a3=0 items=0 ppid=2462 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:44.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:27:44.448676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:27:44.454171 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:27:44.469696 kubelet[2462]: E0123 18:27:44.469231 2462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:27:44.469696 kubelet[2462]: I0123 18:27:44.469595 2462 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:27:44.469696 kubelet[2462]: I0123 18:27:44.469661 2462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:27:44.470282 kubelet[2462]: I0123 18:27:44.470178 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:27:44.472605 kubelet[2462]: E0123 18:27:44.472507 2462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:27:44.472862 kubelet[2462]: E0123 18:27:44.472773 2462 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:27:44.536196 kubelet[2462]: E0123 18:27:44.534588 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="400ms" Jan 23 18:27:44.536196 kubelet[2462]: I0123 18:27:44.535267 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:44.536196 kubelet[2462]: I0123 18:27:44.535309 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:44.536196 kubelet[2462]: I0123 18:27:44.535340 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:44.557936 systemd[1]: Created slice kubepods-burstable-pod730a986346c3d53ccd4bbe116df06297.slice - libcontainer container kubepods-burstable-pod730a986346c3d53ccd4bbe116df06297.slice. Jan 23 18:27:44.571825 kubelet[2462]: I0123 18:27:44.571770 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:27:44.572978 kubelet[2462]: E0123 18:27:44.572587 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" Jan 23 18:27:44.575283 kubelet[2462]: E0123 18:27:44.574750 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:44.577653 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 18:27:44.581609 kubelet[2462]: E0123 18:27:44.581549 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:44.585173 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
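The slice names created above follow the kubelet's systemd cgroup driver layout: kubepods.slice contains a child slice per QoS class, and each pod gets kubepods-<qos>-pod<uid>.slice (guaranteed pods sit directly under kubepods.slice, and dashes in regular pod UIDs become underscores). A rough reconstruction of that naming, checked against one of the static-pod hashes from the log:

    def pod_slice(qos: str, pod_uid: str) -> str:
        # Rough reconstruction of the slice naming visible above (systemd cgroup driver).
        uid = pod_uid.replace("-", "_")
        if qos == "guaranteed":
            return f"kubepods-pod{uid}.slice"
        return f"kubepods-{qos}-pod{uid}.slice"

    print(pod_slice("burstable", "730a986346c3d53ccd4bbe116df06297"))
    # kubepods-burstable-pod730a986346c3d53ccd4bbe116df06297.slice  (as in the log)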
Jan 23 18:27:44.587647 kubelet[2462]: E0123 18:27:44.587583 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:44.647863 kubelet[2462]: I0123 18:27:44.647719 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:44.647863 kubelet[2462]: I0123 18:27:44.647834 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:44.647863 kubelet[2462]: I0123 18:27:44.647864 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:44.649697 kubelet[2462]: I0123 18:27:44.647953 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:44.649697 kubelet[2462]: I0123 18:27:44.649677 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:44.649786 kubelet[2462]: I0123 18:27:44.649742 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:27:44.797456 kubelet[2462]: I0123 18:27:44.796001 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:27:44.797456 kubelet[2462]: E0123 18:27:44.797176 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" Jan 23 18:27:44.882501 kubelet[2462]: E0123 18:27:44.881618 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:44.883987 kubelet[2462]: E0123 18:27:44.883486 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:44.886187 containerd[1632]: 
time="2026-01-23T18:27:44.886057772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 18:27:44.887202 containerd[1632]: time="2026-01-23T18:27:44.886191816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:730a986346c3d53ccd4bbe116df06297,Namespace:kube-system,Attempt:0,}" Jan 23 18:27:44.889366 kubelet[2462]: E0123 18:27:44.888939 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:44.890220 containerd[1632]: time="2026-01-23T18:27:44.890129972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 18:27:44.936495 kubelet[2462]: E0123 18:27:44.936321 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="800ms" Jan 23 18:27:44.944160 containerd[1632]: time="2026-01-23T18:27:44.944041371Z" level=info msg="connecting to shim b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb" address="unix:///run/containerd/s/d248ea88de8b6708d9e070886e5bdb66941f5c11c864103ebe66e3c3abf5a02f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:27:44.962766 containerd[1632]: time="2026-01-23T18:27:44.962541868Z" level=info msg="connecting to shim f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83" address="unix:///run/containerd/s/406e283d8e952ca1d8658f3c18c6f5f46ce41ef29354ca389b8923ea3e02b673" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:27:44.963364 containerd[1632]: time="2026-01-23T18:27:44.963266969Z" level=info msg="connecting to shim fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5" address="unix:///run/containerd/s/e58a70be62665d265f96f7f6745765cfcd8fd540bfb1ca3d3888db1e4e12291a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:27:45.056356 systemd[1]: Started cri-containerd-fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5.scope - libcontainer container fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5. 
Jan 23 18:27:45.080000 audit: BPF prog-id=83 op=LOAD Jan 23 18:27:45.081000 audit: BPF prog-id=84 op=LOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=84 op=UNLOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=85 op=LOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=86 op=LOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=86 op=UNLOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=85 op=UNLOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.081000 audit: BPF prog-id=87 op=LOAD Jan 23 18:27:45.081000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2521 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663633335643335303135643435343066373865653938633131656434 Jan 23 18:27:45.097616 systemd[1]: Started cri-containerd-b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb.scope - libcontainer container b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb. Jan 23 18:27:45.117652 systemd[1]: Started cri-containerd-f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83.scope - libcontainer container f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83. Jan 23 18:27:45.142000 audit: BPF prog-id=88 op=LOAD Jan 23 18:27:45.147000 audit: BPF prog-id=89 op=LOAD Jan 23 18:27:45.147000 audit[2554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.148000 audit: BPF prog-id=89 op=UNLOAD Jan 23 18:27:45.148000 audit[2554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.148000 audit: BPF prog-id=90 op=LOAD Jan 23 18:27:45.148000 audit[2554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.148000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.149000 audit: BPF prog-id=91 op=LOAD Jan 23 18:27:45.149000 audit[2554]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.150000 audit: BPF prog-id=91 op=UNLOAD Jan 23 18:27:45.150000 audit[2554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.151000 audit: BPF prog-id=90 op=UNLOAD Jan 23 18:27:45.151000 audit[2554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.153000 audit: BPF prog-id=92 op=LOAD Jan 23 18:27:45.154000 audit: BPF prog-id=93 op=LOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=93 op=UNLOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=94 op=LOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=95 op=LOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=95 op=UNLOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=94 op=UNLOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.154000 audit: BPF prog-id=96 op=LOAD Jan 23 18:27:45.154000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2526 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323162313039616333316365346334663031656462656639666363 Jan 23 18:27:45.153000 audit: BPF prog-id=97 op=LOAD Jan 23 18:27:45.153000 audit[2554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2510 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323233343938353266316337613564323233306333366236636436 Jan 23 18:27:45.166228 containerd[1632]: time="2026-01-23T18:27:45.166097138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5\"" Jan 23 18:27:45.168616 kubelet[2462]: E0123 18:27:45.167853 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:45.178064 containerd[1632]: time="2026-01-23T18:27:45.177952309Z" level=info msg="CreateContainer within sandbox \"fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:27:45.191508 containerd[1632]: time="2026-01-23T18:27:45.191453500Z" level=info msg="Container 524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:27:45.199464 kubelet[2462]: I0123 18:27:45.199362 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:27:45.199923 kubelet[2462]: E0123 18:27:45.199897 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" Jan 23 18:27:45.201575 kubelet[2462]: E0123 18:27:45.201489 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d6f8959067b55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:27:44.321272661 +0000 UTC m=+0.493040415,LastTimestamp:2026-01-23 18:27:44.321272661 +0000 UTC m=+0.493040415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:27:45.204472 containerd[1632]: time="2026-01-23T18:27:45.204306858Z" level=info msg="CreateContainer within sandbox \"fcc35d35015d4540f78ee98c11ed4e68a009fac6efd685d3cd62d2c498afc4f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50\"" Jan 23 18:27:45.205829 containerd[1632]: time="2026-01-23T18:27:45.205728838Z" level=info msg="StartContainer for \"524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50\"" Jan 23 18:27:45.207189 containerd[1632]: time="2026-01-23T18:27:45.207081354Z" level=info msg="connecting to shim 524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50" address="unix:///run/containerd/s/e58a70be62665d265f96f7f6745765cfcd8fd540bfb1ca3d3888db1e4e12291a" protocol=ttrpc version=3 Jan 23 18:27:45.240718 containerd[1632]: time="2026-01-23T18:27:45.240445552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb\"" Jan 23 18:27:45.241028 kubelet[2462]: E0123 18:27:45.240913 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:45.245949 containerd[1632]: time="2026-01-23T18:27:45.245845719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:730a986346c3d53ccd4bbe116df06297,Namespace:kube-system,Attempt:0,} returns sandbox id \"f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83\"" Jan 23 18:27:45.246820 containerd[1632]: time="2026-01-23T18:27:45.246703615Z" level=info msg="CreateContainer within sandbox \"b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:27:45.246882 kubelet[2462]: E0123 18:27:45.246843 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:45.250710 systemd[1]: Started cri-containerd-524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50.scope - libcontainer container 524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50. 
Jan 23 18:27:45.253523 containerd[1632]: time="2026-01-23T18:27:45.253184780Z" level=info msg="CreateContainer within sandbox \"f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:27:45.260257 containerd[1632]: time="2026-01-23T18:27:45.260216690Z" level=info msg="Container dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:27:45.273761 containerd[1632]: time="2026-01-23T18:27:45.273676773Z" level=info msg="Container 0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:27:45.276285 containerd[1632]: time="2026-01-23T18:27:45.276169749Z" level=info msg="CreateContainer within sandbox \"b922349852f1c7a5d2230c36b6cd6fd6a28c23a8df5a57af2bfd77acd3848cbb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3\"" Jan 23 18:27:45.276000 audit: BPF prog-id=98 op=LOAD Jan 23 18:27:45.276875 containerd[1632]: time="2026-01-23T18:27:45.276741207Z" level=info msg="StartContainer for \"dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3\"" Jan 23 18:27:45.277000 audit: BPF prog-id=99 op=LOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.277000 audit: BPF prog-id=99 op=UNLOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.277000 audit: BPF prog-id=100 op=LOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.277000 audit: BPF prog-id=101 op=LOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.277000 audit: BPF prog-id=101 op=UNLOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.277000 audit: BPF prog-id=100 op=UNLOAD Jan 23 18:27:45.277000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.278000 audit: BPF prog-id=102 op=LOAD Jan 23 18:27:45.278000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2521 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532346162626138636434343336383339303336616235306332373731 Jan 23 18:27:45.279310 containerd[1632]: time="2026-01-23T18:27:45.279287195Z" level=info msg="connecting to shim dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3" address="unix:///run/containerd/s/d248ea88de8b6708d9e070886e5bdb66941f5c11c864103ebe66e3c3abf5a02f" protocol=ttrpc version=3 Jan 23 18:27:45.292721 containerd[1632]: time="2026-01-23T18:27:45.292668344Z" level=info msg="CreateContainer within sandbox \"f221b109ac31ce4c4f01edbef9fccff7528cd0628c640440f96a71aeda174b83\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6\"" Jan 23 18:27:45.294447 containerd[1632]: time="2026-01-23T18:27:45.294247928Z" level=info msg="StartContainer for \"0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6\"" Jan 23 18:27:45.296935 containerd[1632]: time="2026-01-23T18:27:45.296844684Z" level=info msg="connecting to shim 0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6" 
address="unix:///run/containerd/s/406e283d8e952ca1d8658f3c18c6f5f46ce41ef29354ca389b8923ea3e02b673" protocol=ttrpc version=3 Jan 23 18:27:45.315963 systemd[1]: Started cri-containerd-dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3.scope - libcontainer container dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3. Jan 23 18:27:45.331849 systemd[1]: Started cri-containerd-0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6.scope - libcontainer container 0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6. Jan 23 18:27:45.346000 audit: BPF prog-id=103 op=LOAD Jan 23 18:27:45.350000 audit: BPF prog-id=104 op=LOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=104 op=UNLOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=105 op=LOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=106 op=LOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=106 op=UNLOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=105 op=UNLOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.350000 audit: BPF prog-id=107 op=LOAD Jan 23 18:27:45.350000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2510 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636231633234383038373664313430353736353037396636343736 Jan 23 18:27:45.358719 containerd[1632]: time="2026-01-23T18:27:45.358690801Z" level=info msg="StartContainer for \"524abba8cd4436839036ab50c2771839a856f2959b21fce2f54a1632614f7f50\" returns successfully" Jan 23 18:27:45.367000 audit: BPF prog-id=108 op=LOAD Jan 23 18:27:45.368000 audit: BPF prog-id=109 op=LOAD Jan 23 18:27:45.368000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.368000 audit: BPF prog-id=109 op=UNLOAD Jan 23 18:27:45.368000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.369000 audit: BPF prog-id=110 op=LOAD Jan 23 18:27:45.369000 audit[2671]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.369000 audit: BPF prog-id=111 op=LOAD Jan 23 18:27:45.369000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.369000 audit: BPF prog-id=111 op=UNLOAD Jan 23 18:27:45.369000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.369000 audit: BPF prog-id=110 op=UNLOAD Jan 23 18:27:45.369000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.369000 audit: BPF prog-id=112 op=LOAD Jan 23 18:27:45.369000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2526 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039303966653535613634386661363262623233393233376165353231 Jan 23 18:27:45.411699 kubelet[2462]: E0123 18:27:45.411658 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:27:45.416432 containerd[1632]: time="2026-01-23T18:27:45.416305982Z" level=info msg="StartContainer for \"dbcb1c2480876d1405765079f64763afbc90ca01ae94244ee27dcef090e04dd3\" returns successfully" Jan 23 18:27:45.427315 kubelet[2462]: E0123 18:27:45.427238 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:27:45.434834 containerd[1632]: time="2026-01-23T18:27:45.434698280Z" level=info msg="StartContainer for \"0909fe55a648fa62bb239237ae5213d49c51c2438560bc2e5b155f9bd6fbd4d6\" returns successfully" Jan 23 18:27:45.441618 kubelet[2462]: E0123 18:27:45.441595 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:45.442734 kubelet[2462]: E0123 18:27:45.442657 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:45.444596 kubelet[2462]: E0123 18:27:45.444579 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:45.444790 kubelet[2462]: E0123 18:27:45.444773 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:45.449460 kubelet[2462]: E0123 18:27:45.449443 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:45.449862 kubelet[2462]: E0123 18:27:45.449848 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:46.119208 kubelet[2462]: I0123 18:27:46.117740 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:27:47.020469 kubelet[2462]: E0123 18:27:47.019758 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:47.020469 kubelet[2462]: E0123 18:27:47.020702 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:47.032762 kubelet[2462]: E0123 18:27:47.032734 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:47.032929 kubelet[2462]: E0123 18:27:47.032856 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:48.101677 kubelet[2462]: E0123 18:27:48.072623 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 
18:27:48.101677 kubelet[2462]: E0123 18:27:48.073142 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:48.770766 kubelet[2462]: E0123 18:27:48.770561 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:48.771173 kubelet[2462]: E0123 18:27:48.771023 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:49.735838 kubelet[2462]: E0123 18:27:49.735110 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:27:49.739465 kubelet[2462]: E0123 18:27:49.737274 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:50.117988 kubelet[2462]: E0123 18:27:50.117851 2462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 18:27:50.202481 kubelet[2462]: I0123 18:27:50.201937 2462 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:27:50.234051 kubelet[2462]: I0123 18:27:50.233845 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:50.245696 kubelet[2462]: E0123 18:27:50.245518 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:50.245696 kubelet[2462]: I0123 18:27:50.245587 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:50.249480 kubelet[2462]: E0123 18:27:50.249360 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:50.249480 kubelet[2462]: I0123 18:27:50.249481 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:27:50.253534 kubelet[2462]: E0123 18:27:50.253366 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 18:27:50.337656 kubelet[2462]: I0123 18:27:50.337564 2462 apiserver.go:52] "Watching apiserver" Jan 23 18:27:50.526603 kubelet[2462]: I0123 18:27:50.523905 2462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:27:52.836149 systemd[1]: Reload requested from client PID 2751 ('systemctl') (unit session-8.scope)... Jan 23 18:27:52.836204 systemd[1]: Reloading... Jan 23 18:27:53.008627 zram_generator::config[2796]: No configuration found. Jan 23 18:27:53.669279 systemd[1]: Reloading finished in 832 ms. Jan 23 18:27:53.700113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
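The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are expected this early in bring-up: system-node-critical is one of the built-in priority classes the API server creates on startup, so the mirror pods for the static control-plane pods succeed once it exists. A hedged sketch that checks for the class, assuming the official kubernetes Python client and an admin kubeconfig (neither appears in the log):

    # Hedged sketch (assumes the "kubernetes" Python client and a reachable cluster):
    # confirm the built-in system-node-critical PriorityClass exists once the apiserver is up.
    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    sched = client.SchedulingV1Api()
    pc = sched.read_priority_class("system-node-critical")
    print(pc.metadata.name, pc.value)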
Jan 23 18:27:53.700599 kubelet[2462]: I0123 18:27:53.700161 2462 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:27:53.717208 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:27:53.717768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:53.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:53.717916 systemd[1]: kubelet.service: Consumed 3.121s CPU time, 131.5M memory peak. Jan 23 18:27:53.719911 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 23 18:27:53.719979 kernel: audit: type=1131 audit(1769192873.717:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:53.720796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:27:53.719000 audit: BPF prog-id=113 op=LOAD Jan 23 18:27:53.730193 kernel: audit: type=1334 audit(1769192873.719:398): prog-id=113 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=66 op=UNLOAD Jan 23 18:27:53.733182 kernel: audit: type=1334 audit(1769192873.719:399): prog-id=66 op=UNLOAD Jan 23 18:27:53.733252 kernel: audit: type=1334 audit(1769192873.719:400): prog-id=114 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=114 op=LOAD Jan 23 18:27:53.736482 kernel: audit: type=1334 audit(1769192873.719:401): prog-id=115 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=115 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=67 op=UNLOAD Jan 23 18:27:53.742330 kernel: audit: type=1334 audit(1769192873.719:402): prog-id=67 op=UNLOAD Jan 23 18:27:53.719000 audit: BPF prog-id=68 op=UNLOAD Jan 23 18:27:53.763001 kernel: audit: type=1334 audit(1769192873.719:403): prog-id=68 op=UNLOAD Jan 23 18:27:53.763159 kernel: audit: type=1334 audit(1769192873.719:404): prog-id=116 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=116 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=80 op=UNLOAD Jan 23 18:27:53.766003 kernel: audit: type=1334 audit(1769192873.719:405): prog-id=80 op=UNLOAD Jan 23 18:27:53.766059 kernel: audit: type=1334 audit(1769192873.719:406): prog-id=117 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=117 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=118 op=LOAD Jan 23 18:27:53.719000 audit: BPF prog-id=81 op=UNLOAD Jan 23 18:27:53.719000 audit: BPF prog-id=82 op=UNLOAD Jan 23 18:27:53.726000 audit: BPF prog-id=119 op=LOAD Jan 23 18:27:53.726000 audit: BPF prog-id=71 op=UNLOAD Jan 23 18:27:53.759000 audit: BPF prog-id=120 op=LOAD Jan 23 18:27:53.759000 audit: BPF prog-id=72 op=UNLOAD Jan 23 18:27:53.759000 audit: BPF prog-id=121 op=LOAD Jan 23 18:27:53.759000 audit: BPF prog-id=122 op=LOAD Jan 23 18:27:53.759000 audit: BPF prog-id=73 op=UNLOAD Jan 23 18:27:53.759000 audit: BPF prog-id=74 op=UNLOAD Jan 23 18:27:53.762000 audit: BPF prog-id=123 op=LOAD Jan 23 18:27:53.762000 audit: BPF prog-id=76 op=UNLOAD Jan 23 18:27:53.762000 audit: BPF prog-id=124 op=LOAD Jan 23 18:27:53.762000 audit: BPF prog-id=125 op=LOAD Jan 23 18:27:53.762000 audit: BPF prog-id=77 op=UNLOAD Jan 23 18:27:53.762000 audit: BPF prog-id=78 op=UNLOAD Jan 23 18:27:53.765000 audit: BPF prog-id=126 op=LOAD Jan 23 18:27:53.765000 audit: BPF prog-id=63 op=UNLOAD Jan 23 18:27:53.765000 audit: BPF prog-id=127 op=LOAD 
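The burst of "audit: BPF prog-id=... op=LOAD/UNLOAD" records around the systemd reload and kubelet restart above appears to come from systemd detaching and re-attaching per-unit cgroup BPF programs, with the kernel printing the backlog once kauditd's suppressed callbacks drain. A small sketch that tallies these operations from a saved plain-text journal dump like this one; "journal.txt" is a placeholder path, not a file from the log:

    # Hedged sketch: count BPF LOAD/UNLOAD audit records in a plain-text journal dump.
    import re
    from collections import Counter

    pattern = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")
    ops = Counter()
    with open("journal.txt") as f:        # placeholder path for a saved copy of this log
        for line in f:
            for prog_id, op in pattern.findall(line):
                ops[op] += 1
    print(dict(ops))                      # e.g. {'LOAD': 30, 'UNLOAD': 22}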
Jan 23 18:27:53.765000 audit: BPF prog-id=128 op=LOAD Jan 23 18:27:53.765000 audit: BPF prog-id=64 op=UNLOAD Jan 23 18:27:53.765000 audit: BPF prog-id=65 op=UNLOAD Jan 23 18:27:53.765000 audit: BPF prog-id=129 op=LOAD Jan 23 18:27:53.765000 audit: BPF prog-id=75 op=UNLOAD Jan 23 18:27:53.768000 audit: BPF prog-id=130 op=LOAD Jan 23 18:27:53.768000 audit: BPF prog-id=79 op=UNLOAD Jan 23 18:27:53.768000 audit: BPF prog-id=131 op=LOAD Jan 23 18:27:53.769000 audit: BPF prog-id=132 op=LOAD Jan 23 18:27:53.769000 audit: BPF prog-id=69 op=UNLOAD Jan 23 18:27:53.769000 audit: BPF prog-id=70 op=UNLOAD Jan 23 18:27:54.002279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:27:54.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:27:54.019264 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:27:54.168148 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:27:54.168148 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:27:54.168148 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:27:54.168148 kubelet[2842]: I0123 18:27:54.168251 2842 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:27:54.183142 kubelet[2842]: I0123 18:27:54.183050 2842 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:27:54.183142 kubelet[2842]: I0123 18:27:54.183136 2842 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:27:54.183613 kubelet[2842]: I0123 18:27:54.183499 2842 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:27:54.186280 kubelet[2842]: I0123 18:27:54.186138 2842 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:27:54.193334 kubelet[2842]: I0123 18:27:54.192713 2842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:27:54.200076 kubelet[2842]: I0123 18:27:54.199988 2842 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:27:54.209779 kubelet[2842]: I0123 18:27:54.208852 2842 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:27:54.209779 kubelet[2842]: I0123 18:27:54.209281 2842 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:27:54.210016 kubelet[2842]: I0123 18:27:54.209311 2842 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:27:54.210016 kubelet[2842]: I0123 18:27:54.209987 2842 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:27:54.210016 kubelet[2842]: I0123 18:27:54.210002 2842 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:27:54.210723 kubelet[2842]: I0123 18:27:54.210113 2842 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:27:54.210723 kubelet[2842]: I0123 18:27:54.210446 2842 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:27:54.210723 kubelet[2842]: I0123 18:27:54.210468 2842 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:27:54.210723 kubelet[2842]: I0123 18:27:54.210499 2842 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:27:54.210723 kubelet[2842]: I0123 18:27:54.210591 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:27:54.234045 kubelet[2842]: I0123 18:27:54.231771 2842 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:27:54.234045 kubelet[2842]: I0123 18:27:54.233615 2842 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:27:54.250477 kubelet[2842]: I0123 18:27:54.249504 2842 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:27:54.250477 kubelet[2842]: I0123 18:27:54.249626 2842 server.go:1289] "Started kubelet" Jan 23 18:27:54.250477 kubelet[2842]: I0123 18:27:54.249856 2842 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:27:54.250477 kubelet[2842]: I0123 
18:27:54.250107 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:27:54.252517 kubelet[2842]: I0123 18:27:54.252162 2842 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:27:54.293663 kubelet[2842]: I0123 18:27:54.293279 2842 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:27:54.297132 kubelet[2842]: E0123 18:27:54.297064 2842 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:27:54.298279 kubelet[2842]: I0123 18:27:54.298038 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:27:54.298356 kubelet[2842]: I0123 18:27:54.298291 2842 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:27:54.300628 kubelet[2842]: I0123 18:27:54.300537 2842 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:27:54.301643 kubelet[2842]: I0123 18:27:54.301510 2842 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:27:54.303156 kubelet[2842]: I0123 18:27:54.302930 2842 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:27:54.319095 kubelet[2842]: I0123 18:27:54.318641 2842 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:27:54.353682 kubelet[2842]: I0123 18:27:54.352619 2842 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:27:54.353682 kubelet[2842]: I0123 18:27:54.352782 2842 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:27:54.404846 kubelet[2842]: I0123 18:27:54.404642 2842 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:27:54.409482 kubelet[2842]: I0123 18:27:54.409314 2842 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:27:54.409907 kubelet[2842]: I0123 18:27:54.409791 2842 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:27:54.410269 kubelet[2842]: I0123 18:27:54.410251 2842 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
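The container_manager_linux.go entry above serializes the kubelet's NodeConfig, including the hard-eviction thresholds, as a single JSON object inside the log line. As a minimal sketch (assuming Python 3 and a trimmed copy of that payload; this is illustrative tooling, not kubelet code), the thresholds can be pulled back out of such an entry like this:

    import json
    import re

    # Trimmed excerpt of the nodeConfig={...} payload logged above.
    line = 'nodeConfig={"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"CgroupDriver":"systemd","CgroupVersion":2}'

    match = re.search(r'nodeConfig=(\{.*\})', line)
    if match:
        cfg = json.loads(match.group(1))
        for t in cfg.get("HardEvictionThresholds", []):
            # Each threshold carries either an absolute Quantity or a Percentage.
            value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
            print(t["Signal"], t["Operator"], value)

Run against the full logged payload, this lists the five signals shown above: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, and imagefs.inodesFree below 5%.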
Jan 23 18:27:54.410624 kubelet[2842]: I0123 18:27:54.410607 2842 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:27:54.410755 kubelet[2842]: E0123 18:27:54.410732 2842 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:27:54.494119 kubelet[2842]: I0123 18:27:54.488206 2842 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:27:54.505630 kubelet[2842]: I0123 18:27:54.504157 2842 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:27:54.505630 kubelet[2842]: I0123 18:27:54.504782 2842 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:27:54.507075 kubelet[2842]: I0123 18:27:54.507024 2842 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:27:54.507421 kubelet[2842]: I0123 18:27:54.507116 2842 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:27:54.507546 kubelet[2842]: I0123 18:27:54.507377 2842 policy_none.go:49] "None policy: Start" Jan 23 18:27:54.507647 kubelet[2842]: I0123 18:27:54.507614 2842 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:27:54.507647 kubelet[2842]: I0123 18:27:54.507646 2842 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:27:54.507946 kubelet[2842]: I0123 18:27:54.507875 2842 state_mem.go:75] "Updated machine memory state" Jan 23 18:27:54.511192 kubelet[2842]: E0123 18:27:54.511070 2842 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:27:54.578722 kubelet[2842]: E0123 18:27:54.578667 2842 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:27:54.579553 kubelet[2842]: I0123 18:27:54.579499 2842 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:27:54.582438 kubelet[2842]: I0123 18:27:54.579561 2842 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:27:54.584949 kubelet[2842]: I0123 18:27:54.584832 2842 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:27:54.619081 kubelet[2842]: E0123 18:27:54.616092 2842 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:27:54.712677 kubelet[2842]: I0123 18:27:54.712638 2842 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:27:54.713033 kubelet[2842]: I0123 18:27:54.712654 2842 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:54.713782 kubelet[2842]: I0123 18:27:54.713712 2842 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:54.744187 kubelet[2842]: I0123 18:27:54.743841 2842 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:27:54.761461 kubelet[2842]: I0123 18:27:54.761211 2842 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 18:27:54.761461 kubelet[2842]: I0123 18:27:54.761320 2842 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:27:54.824671 kubelet[2842]: I0123 18:27:54.823761 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:54.824671 kubelet[2842]: I0123 18:27:54.824135 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:54.824671 kubelet[2842]: I0123 18:27:54.824249 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:54.824671 kubelet[2842]: I0123 18:27:54.824269 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:54.824671 kubelet[2842]: I0123 18:27:54.824321 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:27:54.826036 kubelet[2842]: I0123 18:27:54.824335 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:54.826036 kubelet[2842]: I0123 18:27:54.824472 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/730a986346c3d53ccd4bbe116df06297-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"730a986346c3d53ccd4bbe116df06297\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:27:54.826036 kubelet[2842]: I0123 18:27:54.824491 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:54.826036 kubelet[2842]: I0123 18:27:54.824511 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:55.020886 kubelet[2842]: E0123 18:27:55.020736 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:55.024847 kubelet[2842]: E0123 18:27:55.024808 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:55.026268 kubelet[2842]: E0123 18:27:55.025867 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:55.212883 kubelet[2842]: I0123 18:27:55.212277 2842 apiserver.go:52] "Watching apiserver" Jan 23 18:27:55.292168 kubelet[2842]: I0123 18:27:55.291725 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.291672414 podStartE2EDuration="1.291672414s" podCreationTimestamp="2026-01-23 18:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:55.291044166 +0000 UTC m=+1.252327180" watchObservedRunningTime="2026-01-23 18:27:55.291672414 +0000 UTC m=+1.252955397" Jan 23 18:27:55.302371 kubelet[2842]: I0123 18:27:55.302253 2842 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:27:55.331035 kubelet[2842]: I0123 18:27:55.330908 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.330886564 podStartE2EDuration="1.330886564s" podCreationTimestamp="2026-01-23 18:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:55.311106741 +0000 UTC m=+1.272389724" watchObservedRunningTime="2026-01-23 18:27:55.330886564 +0000 UTC m=+1.292169547" Jan 23 18:27:55.348514 kubelet[2842]: I0123 18:27:55.348367 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.348346268 podStartE2EDuration="1.348346268s" podCreationTimestamp="2026-01-23 18:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
18:27:55.331038582 +0000 UTC m=+1.292321575" watchObservedRunningTime="2026-01-23 18:27:55.348346268 +0000 UTC m=+1.309629251" Jan 23 18:27:55.458707 kubelet[2842]: E0123 18:27:55.458124 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:55.459677 kubelet[2842]: I0123 18:27:55.458859 2842 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:55.460073 kubelet[2842]: E0123 18:27:55.459313 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:55.474093 kubelet[2842]: E0123 18:27:55.474047 2842 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:27:55.475110 kubelet[2842]: E0123 18:27:55.475011 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:56.459991 kubelet[2842]: E0123 18:27:56.459867 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:56.459991 kubelet[2842]: E0123 18:27:56.459990 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:56.461636 kubelet[2842]: E0123 18:27:56.461520 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:57.464645 kubelet[2842]: E0123 18:27:57.464561 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:57.663822 kubelet[2842]: I0123 18:27:57.663729 2842 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:27:57.664294 containerd[1632]: time="2026-01-23T18:27:57.664234055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:27:57.665511 kubelet[2842]: I0123 18:27:57.664767 2842 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:27:58.475378 kubelet[2842]: E0123 18:27:58.474979 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:59.023831 systemd[1]: Created slice kubepods-besteffort-pod221669f6_6cbb_4937_bd95_ba09c9a11fcb.slice - libcontainer container kubepods-besteffort-pod221669f6_6cbb_4937_bd95_ba09c9a11fcb.slice. Jan 23 18:27:59.141314 systemd[1]: Created slice kubepods-besteffort-pod728624a0_5e47_4238_a6a8_1873a993432d.slice - libcontainer container kubepods-besteffort-pod728624a0_5e47_4238_a6a8_1873a993432d.slice. 
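The "Created slice" entries just above use the kubelet's systemd cgroup naming, in which the pod UID is embedded with its dashes escaped as underscores: compare kubepods-besteffort-pod221669f6_6cbb_4937_bd95_ba09c9a11fcb.slice with the UID 221669f6-6cbb-4937-bd95-ba09c9a11fcb that appears in the volume entries below. A minimal sketch of that mapping, assuming Python 3.9+ (the helper name is illustrative, not part of the kubelet):

    # Recover the pod UID from a kubepods slice name created by the
    # systemd cgroup driver, which escapes '-' as '_' in the UID part.
    def pod_uid_from_slice(slice_name: str) -> str:
        name = slice_name.removesuffix(".slice")
        uid_escaped = name.rsplit("-pod", 1)[1]  # text after the last "-pod"
        return uid_escaped.replace("_", "-")

    print(pod_uid_from_slice(
        "kubepods-besteffort-pod221669f6_6cbb_4937_bd95_ba09c9a11fcb.slice"
    ))  # -> 221669f6-6cbb-4937-bd95-ba09c9a11fcb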
Jan 23 18:27:59.183994 kubelet[2842]: I0123 18:27:59.183915 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/221669f6-6cbb-4937-bd95-ba09c9a11fcb-kube-proxy\") pod \"kube-proxy-pd6td\" (UID: \"221669f6-6cbb-4937-bd95-ba09c9a11fcb\") " pod="kube-system/kube-proxy-pd6td" Jan 23 18:27:59.183994 kubelet[2842]: I0123 18:27:59.183991 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/221669f6-6cbb-4937-bd95-ba09c9a11fcb-xtables-lock\") pod \"kube-proxy-pd6td\" (UID: \"221669f6-6cbb-4937-bd95-ba09c9a11fcb\") " pod="kube-system/kube-proxy-pd6td" Jan 23 18:27:59.184231 kubelet[2842]: I0123 18:27:59.184014 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221669f6-6cbb-4937-bd95-ba09c9a11fcb-lib-modules\") pod \"kube-proxy-pd6td\" (UID: \"221669f6-6cbb-4937-bd95-ba09c9a11fcb\") " pod="kube-system/kube-proxy-pd6td" Jan 23 18:27:59.184231 kubelet[2842]: I0123 18:27:59.184031 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78bm\" (UniqueName: \"kubernetes.io/projected/221669f6-6cbb-4937-bd95-ba09c9a11fcb-kube-api-access-g78bm\") pod \"kube-proxy-pd6td\" (UID: \"221669f6-6cbb-4937-bd95-ba09c9a11fcb\") " pod="kube-system/kube-proxy-pd6td" Jan 23 18:27:59.285925 kubelet[2842]: I0123 18:27:59.285021 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/728624a0-5e47-4238-a6a8-1873a993432d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8t99c\" (UID: \"728624a0-5e47-4238-a6a8-1873a993432d\") " pod="tigera-operator/tigera-operator-7dcd859c48-8t99c" Jan 23 18:27:59.285925 kubelet[2842]: I0123 18:27:59.285133 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwmv\" (UniqueName: \"kubernetes.io/projected/728624a0-5e47-4238-a6a8-1873a993432d-kube-api-access-qnwmv\") pod \"tigera-operator-7dcd859c48-8t99c\" (UID: \"728624a0-5e47-4238-a6a8-1873a993432d\") " pod="tigera-operator/tigera-operator-7dcd859c48-8t99c" Jan 23 18:27:59.336254 kubelet[2842]: E0123 18:27:59.336124 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:59.337082 containerd[1632]: time="2026-01-23T18:27:59.336969698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd6td,Uid:221669f6-6cbb-4937-bd95-ba09c9a11fcb,Namespace:kube-system,Attempt:0,}" Jan 23 18:27:59.416785 containerd[1632]: time="2026-01-23T18:27:59.416655290Z" level=info msg="connecting to shim e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3" address="unix:///run/containerd/s/990a37d9459b5785923f3778c3cee7d49cceebeb0f37caf00a8571b223e7a341" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:27:59.450699 containerd[1632]: time="2026-01-23T18:27:59.450574226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8t99c,Uid:728624a0-5e47-4238-a6a8-1873a993432d,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:27:59.499937 containerd[1632]: time="2026-01-23T18:27:59.499728183Z" level=info msg="connecting to shim 
19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776" address="unix:///run/containerd/s/aec75b0478873d585b2066928f1ec21a44d57b6a10a493bf15a09d5a594bb872" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:27:59.514807 systemd[1]: Started cri-containerd-e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3.scope - libcontainer container e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3. Jan 23 18:27:59.542741 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 18:27:59.542907 kernel: audit: type=1334 audit(1769192879.536:439): prog-id=133 op=LOAD Jan 23 18:27:59.536000 audit: BPF prog-id=133 op=LOAD Jan 23 18:27:59.551000 audit: BPF prog-id=134 op=LOAD Jan 23 18:27:59.552661 systemd[1]: Started cri-containerd-19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776.scope - libcontainer container 19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776. Jan 23 18:27:59.557455 kernel: audit: type=1334 audit(1769192879.551:440): prog-id=134 op=LOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.570524 kernel: audit: type=1300 audit(1769192879.551:440): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.551000 audit: BPF prog-id=134 op=UNLOAD Jan 23 18:27:59.583232 kernel: audit: type=1327 audit(1769192879.551:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.583296 kernel: audit: type=1334 audit(1769192879.551:441): prog-id=134 op=UNLOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.606090 kernel: audit: type=1300 audit(1769192879.551:441): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.606160 kernel: audit: type=1327 audit(1769192879.551:441): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.606242 kernel: audit: type=1334 audit(1769192879.551:442): prog-id=135 op=LOAD Jan 23 18:27:59.551000 audit: BPF prog-id=135 op=LOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.617876 containerd[1632]: time="2026-01-23T18:27:59.617742976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd6td,Uid:221669f6-6cbb-4937-bd95-ba09c9a11fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3\"" Jan 23 18:27:59.619796 kernel: audit: type=1300 audit(1769192879.551:442): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.619844 kernel: audit: type=1327 audit(1769192879.551:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.621050 kubelet[2842]: E0123 18:27:59.620994 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:27:59.551000 audit: BPF prog-id=136 op=LOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.551000 audit: BPF prog-id=136 op=UNLOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.551000 audit: BPF prog-id=135 op=UNLOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.551000 audit: BPF prog-id=137 op=LOAD Jan 23 18:27:59.551000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2907 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530626237643764343165373866336263386265656164396430363931 Jan 23 18:27:59.631000 audit: BPF prog-id=138 op=LOAD Jan 23 18:27:59.634141 containerd[1632]: time="2026-01-23T18:27:59.633936136Z" level=info msg="CreateContainer within sandbox \"e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:27:59.633000 audit: BPF prog-id=139 op=LOAD Jan 23 18:27:59.633000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.633000 audit: BPF prog-id=139 op=UNLOAD Jan 23 18:27:59.633000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.633000 audit: BPF prog-id=140 op=LOAD Jan 23 18:27:59.633000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.634000 audit: BPF prog-id=141 op=LOAD Jan 23 18:27:59.634000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.634000 audit: BPF prog-id=141 op=UNLOAD Jan 23 18:27:59.634000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.634000 audit: BPF prog-id=140 op=UNLOAD Jan 23 18:27:59.634000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.634000 audit: BPF prog-id=142 op=LOAD Jan 23 18:27:59.634000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2933 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139636531616663353832613138366430383261656264323135626536 Jan 23 18:27:59.657335 containerd[1632]: time="2026-01-23T18:27:59.655493883Z" level=info msg="Container 919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:27:59.670537 containerd[1632]: time="2026-01-23T18:27:59.670340522Z" level=info msg="CreateContainer within sandbox \"e0bb7d7d41e78f3bc8beead9d0691b9e68d392dc848d2e0c854e1140fe6e57b3\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7\"" Jan 23 18:27:59.671593 containerd[1632]: time="2026-01-23T18:27:59.671558469Z" level=info msg="StartContainer for \"919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7\"" Jan 23 18:27:59.673723 containerd[1632]: time="2026-01-23T18:27:59.673656901Z" level=info msg="connecting to shim 919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7" address="unix:///run/containerd/s/990a37d9459b5785923f3778c3cee7d49cceebeb0f37caf00a8571b223e7a341" protocol=ttrpc version=3 Jan 23 18:27:59.699901 containerd[1632]: time="2026-01-23T18:27:59.699859280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8t99c,Uid:728624a0-5e47-4238-a6a8-1873a993432d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776\"" Jan 23 18:27:59.709293 containerd[1632]: time="2026-01-23T18:27:59.709195793Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:27:59.710763 systemd[1]: Started cri-containerd-919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7.scope - libcontainer container 919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7. Jan 23 18:27:59.809000 audit: BPF prog-id=143 op=LOAD Jan 23 18:27:59.809000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2907 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931393936356533346234306138393266353564613162666165376131 Jan 23 18:27:59.809000 audit: BPF prog-id=144 op=LOAD Jan 23 18:27:59.809000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2907 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931393936356533346234306138393266353564613162666165376131 Jan 23 18:27:59.809000 audit: BPF prog-id=144 op=UNLOAD Jan 23 18:27:59.809000 audit[2987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2907 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931393936356533346234306138393266353564613162666165376131 Jan 23 18:27:59.809000 audit: BPF prog-id=143 op=UNLOAD Jan 23 18:27:59.809000 audit[2987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=2907 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931393936356533346234306138393266353564613162666165376131 Jan 23 18:27:59.809000 audit: BPF prog-id=145 op=LOAD Jan 23 18:27:59.809000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2907 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:27:59.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931393936356533346234306138393266353564613162666165376131 Jan 23 18:27:59.849492 containerd[1632]: time="2026-01-23T18:27:59.849339042Z" level=info msg="StartContainer for \"919965e34b40a892f55da1bfae7a19be335f1852b070584d1ff974b601dba3f7\" returns successfully" Jan 23 18:28:00.216000 audit[3056]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.216000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3ae47b10 a2=0 a3=7fff3ae47afc items=0 ppid=3005 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:28:00.216000 audit[3057]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.216000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3e9bb110 a2=0 a3=7ffe3e9bb0fc items=0 ppid=3005 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:28:00.219000 audit[3059]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.219000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc69e5c4c0 a2=0 a3=7ffc69e5c4ac items=0 ppid=3005 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:28:00.219000 audit[3058]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3058 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.219000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7df3d560 a2=0 a3=7ffe7df3d54c items=0 ppid=3005 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:28:00.221000 audit[3060]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.221000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdb800be0 a2=0 a3=7fffdb800bcc items=0 ppid=3005 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:28:00.221000 audit[3061]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.221000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe23be6480 a2=0 a3=7ffe23be646c items=0 ppid=3005 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:28:00.339000 audit[3065]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.339000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdbc0bdd0 a2=0 a3=7fffdbc0bdbc items=0 ppid=3005 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:28:00.422000 audit[3067]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.422000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffefd2ab200 a2=0 a3=7ffefd2ab1ec items=0 ppid=3005 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 18:28:00.434000 audit[3070]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3070 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.434000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff33910f90 a2=0 a3=7fff33910f7c items=0 ppid=3005 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 18:28:00.437000 audit[3071]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.437000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe796edc50 a2=0 a3=7ffe796edc3c items=0 ppid=3005 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:28:00.445000 audit[3073]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.445000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc42ae0080 a2=0 a3=7ffc42ae006c items=0 ppid=3005 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:28:00.449000 audit[3074]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.449000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff298346c0 a2=0 a3=7fff298346ac items=0 ppid=3005 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:28:00.456000 audit[3076]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.456000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb67e2590 a2=0 a3=7fffb67e257c items=0 ppid=3005 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.456000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:28:00.466000 audit[3079]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.466000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9d829850 a2=0 a3=7ffc9d82983c items=0 ppid=3005 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 18:28:00.469000 audit[3080]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.469000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe166b0f90 a2=0 a3=7ffe166b0f7c items=0 ppid=3005 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:28:00.475000 audit[3082]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.475000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe20422e10 a2=0 a3=7ffe20422dfc items=0 ppid=3005 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:28:00.478000 audit[3083]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.478000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd31afa240 a2=0 a3=7ffd31afa22c items=0 ppid=3005 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:28:00.485268 kubelet[2842]: E0123 18:28:00.484982 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:00.486000 audit[3085]: NETFILTER_CFG table=filter:71 family=2 entries=1 
op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.486000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe71681470 a2=0 a3=7ffe7168145c items=0 ppid=3005 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:28:00.497000 audit[3088]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.497000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc326c2c00 a2=0 a3=7ffc326c2bec items=0 ppid=3005 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:28:00.503251 kubelet[2842]: I0123 18:28:00.503095 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pd6td" podStartSLOduration=2.503077103 podStartE2EDuration="2.503077103s" podCreationTimestamp="2026-01-23 18:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:00.502541952 +0000 UTC m=+6.463824935" watchObservedRunningTime="2026-01-23 18:28:00.503077103 +0000 UTC m=+6.464360096" Jan 23 18:28:00.509000 audit[3091]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.509000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce739ba30 a2=0 a3=7ffce739ba1c items=0 ppid=3005 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:28:00.512000 audit[3092]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.512000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd702a6670 a2=0 a3=7ffd702a665c items=0 ppid=3005 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.512000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:28:00.518000 audit[3094]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.518000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcf01a7660 a2=0 a3=7ffcf01a764c items=0 ppid=3005 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:28:00.529000 audit[3097]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.529000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc678ad710 a2=0 a3=7ffc678ad6fc items=0 ppid=3005 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:28:00.532000 audit[3098]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.532000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccd2f3460 a2=0 a3=7ffccd2f344c items=0 ppid=3005 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:28:00.541000 audit[3100]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:28:00.541000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe19156a20 a2=0 a3=7ffe19156a0c items=0 ppid=3005 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:28:00.596000 audit[3106]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:00.596000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe270664f0 a2=0 a3=7ffe270664dc items=0 ppid=3005 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:00.612000 audit[3106]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:00.612000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe270664f0 a2=0 a3=7ffe270664dc items=0 ppid=3005 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:00.617000 audit[3111]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.617000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc9f6c2440 a2=0 a3=7ffc9f6c242c items=0 ppid=3005 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:28:00.626000 audit[3113]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.626000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe4e29cc80 a2=0 a3=7ffe4e29cc6c items=0 ppid=3005 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.626000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 18:28:00.638000 audit[3116]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.638000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe87614360 a2=0 a3=7ffe8761434c items=0 ppid=3005 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 18:28:00.642000 audit[3117]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.642000 audit[3117]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffdd4dfba20 a2=0 a3=7ffdd4dfba0c items=0 ppid=3005 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:28:00.652000 audit[3119]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.652000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc9d158e0 a2=0 a3=7fffc9d158cc items=0 ppid=3005 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:28:00.655000 audit[3120]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.655000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbf9ddba0 a2=0 a3=7ffdbf9ddb8c items=0 ppid=3005 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:28:00.663000 audit[3122]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.663000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe8e5ebd30 a2=0 a3=7ffe8e5ebd1c items=0 ppid=3005 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 18:28:00.673000 audit[3125]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.673000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc2c1e66d0 a2=0 a3=7ffc2c1e66bc items=0 ppid=3005 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.673000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:28:00.676000 audit[3126]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.676000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe85aa940 a2=0 a3=7fffe85aa92c items=0 ppid=3005 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:28:00.682000 audit[3128]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.682000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc49d35f60 a2=0 a3=7ffc49d35f4c items=0 ppid=3005 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:28:00.686000 audit[3129]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.686000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2f0e8040 a2=0 a3=7ffe2f0e802c items=0 ppid=3005 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:28:00.694000 audit[3131]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.694000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd20982b70 a2=0 a3=7ffd20982b5c items=0 ppid=3005 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:28:00.704000 audit[3134]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.704000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe147fbe0 a2=0 a3=7fffe147fbcc 
items=0 ppid=3005 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.704000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:28:00.714000 audit[3137]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.714000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeabfbf9c0 a2=0 a3=7ffeabfbf9ac items=0 ppid=3005 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 18:28:00.717000 audit[3138]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.717000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5a8a3450 a2=0 a3=7ffd5a8a343c items=0 ppid=3005 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:28:00.723000 audit[3140]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.723000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd3c71fdf0 a2=0 a3=7ffd3c71fddc items=0 ppid=3005 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:28:00.733000 audit[3143]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.733000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef2d8e180 a2=0 a3=7ffef2d8e16c items=0 ppid=3005 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.733000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:28:00.736000 audit[3144]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.736000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff24306120 a2=0 a3=7fff2430610c items=0 ppid=3005 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:28:00.741000 audit[3146]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.741000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdc0bdf570 a2=0 a3=7ffdc0bdf55c items=0 ppid=3005 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:28:00.752000 audit[3147]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.752000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1703a0b0 a2=0 a3=7ffd1703a09c items=0 ppid=3005 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:28:00.758000 audit[3149]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.758000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb9c60b40 a2=0 a3=7fffb9c60b2c items=0 ppid=3005 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:28:00.766000 audit[3152]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:28:00.766000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe92d53b50 a2=0 a3=7ffe92d53b3c items=0 ppid=3005 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:28:00.773000 audit[3154]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:28:00.773000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd82997850 a2=0 a3=7ffd8299783c items=0 ppid=3005 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.773000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:00.774000 audit[3154]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:28:00.774000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd82997850 a2=0 a3=7ffd8299783c items=0 ppid=3005 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:00.774000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:00.875542 kubelet[2842]: E0123 18:28:00.875143 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:00.880945 kubelet[2842]: E0123 18:28:00.880784 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:01.492982 kubelet[2842]: E0123 18:28:01.492766 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:01.493111 kubelet[2842]: E0123 18:28:01.492983 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:01.521910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741442128.mount: Deactivated successfully. 
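The audit PROCTITLE values in the block above are hex-encoded argv strings with NUL separators. As a reading aid, a minimal Python sketch (not part of the logged tooling) that decodes one of the complete values, the pid 3092 entry that created the KUBE-SERVICES chain in the nat table:

    # Decode an audit PROCTITLE hex string back into the original argv.
    # The kernel records the process title as NUL-separated arguments.
    def decode_proctitle(hex_string: str) -> list[str]:
        raw = bytes.fromhex(hex_string)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D5345525649434553002D74006E6174"
    ))
    # ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-SERVICES', '-t', 'nat']
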
Jan 23 18:28:02.494990 kubelet[2842]: E0123 18:28:02.494824 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:03.767211 containerd[1632]: time="2026-01-23T18:28:03.766979922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:03.768527 containerd[1632]: time="2026-01-23T18:28:03.768105902Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 18:28:03.769558 containerd[1632]: time="2026-01-23T18:28:03.769507011Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:03.773003 containerd[1632]: time="2026-01-23T18:28:03.772851755Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:03.773898 containerd[1632]: time="2026-01-23T18:28:03.773813649Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.064548906s" Jan 23 18:28:03.773898 containerd[1632]: time="2026-01-23T18:28:03.773885656Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:28:03.780874 containerd[1632]: time="2026-01-23T18:28:03.780815340Z" level=info msg="CreateContainer within sandbox \"19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:28:03.794713 containerd[1632]: time="2026-01-23T18:28:03.794554003Z" level=info msg="Container 47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:03.803049 containerd[1632]: time="2026-01-23T18:28:03.802903841Z" level=info msg="CreateContainer within sandbox \"19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6\"" Jan 23 18:28:03.803740 containerd[1632]: time="2026-01-23T18:28:03.803664285Z" level=info msg="StartContainer for \"47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6\"" Jan 23 18:28:03.805434 containerd[1632]: time="2026-01-23T18:28:03.805313739Z" level=info msg="connecting to shim 47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6" address="unix:///run/containerd/s/aec75b0478873d585b2066928f1ec21a44d57b6a10a493bf15a09d5a594bb872" protocol=ttrpc version=3 Jan 23 18:28:03.839676 systemd[1]: Started cri-containerd-47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6.scope - libcontainer container 47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6. 
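The repeated dns.go:153 "Nameserver limits exceeded" errors above mean the node's resolver configuration lists more nameservers than the kubelet passes through; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. A minimal sketch of that trimming behaviour, assuming a limit of three as implied by the applied line; this is illustrative only, not kubelet source, and the fourth server below is hypothetical since the omitted entry is not logged:

    # Sketch of the nameserver-limit behaviour behind the dns.go:153 errors.
    MAX_NAMESERVERS = 3  # assumed from the three nameservers that remain applied

    def apply_nameserver_limit(nameservers: list[str]) -> list[str]:
        if len(nameservers) > MAX_NAMESERVERS:
            omitted = nameservers[MAX_NAMESERVERS:]
            print(f"Nameserver limits exceeded, omitting: {omitted}")
        return nameservers[:MAX_NAMESERVERS]

    # "9.9.9.9" is a hypothetical extra entry; the log does not show which
    # nameserver was actually dropped from the host configuration.
    print(apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
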
Jan 23 18:28:03.860000 audit: BPF prog-id=146 op=LOAD Jan 23 18:28:03.861000 audit: BPF prog-id=147 op=LOAD Jan 23 18:28:03.861000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.861000 audit: BPF prog-id=147 op=UNLOAD Jan 23 18:28:03.861000 audit[3163]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.862000 audit: BPF prog-id=148 op=LOAD Jan 23 18:28:03.862000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.862000 audit: BPF prog-id=149 op=LOAD Jan 23 18:28:03.862000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.862000 audit: BPF prog-id=149 op=UNLOAD Jan 23 18:28:03.862000 audit[3163]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.862000 audit: BPF prog-id=148 op=UNLOAD Jan 23 18:28:03.862000 audit[3163]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.862000 audit: BPF prog-id=150 op=LOAD Jan 23 18:28:03.862000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2933 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:03.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623536343865613432663637636634356331376166323564313235 Jan 23 18:28:03.893779 containerd[1632]: time="2026-01-23T18:28:03.893673350Z" level=info msg="StartContainer for \"47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6\" returns successfully" Jan 23 18:28:04.515892 kubelet[2842]: I0123 18:28:04.515684 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8t99c" podStartSLOduration=2.443370618 podStartE2EDuration="6.515667228s" podCreationTimestamp="2026-01-23 18:27:58 +0000 UTC" firstStartedPulling="2026-01-23 18:27:59.703062263 +0000 UTC m=+5.664345246" lastFinishedPulling="2026-01-23 18:28:03.775358873 +0000 UTC m=+9.736641856" observedRunningTime="2026-01-23 18:28:04.515270064 +0000 UTC m=+10.476553047" watchObservedRunningTime="2026-01-23 18:28:04.515667228 +0000 UTC m=+10.476950212" Jan 23 18:28:06.297053 systemd[1]: cri-containerd-47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6.scope: Deactivated successfully. Jan 23 18:28:06.309324 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 18:28:06.310221 kernel: audit: type=1334 audit(1769192886.301:519): prog-id=146 op=UNLOAD Jan 23 18:28:06.301000 audit: BPF prog-id=146 op=UNLOAD Jan 23 18:28:06.310321 containerd[1632]: time="2026-01-23T18:28:06.309862525Z" level=info msg="received container exit event container_id:\"47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6\" id:\"47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6\" pid:3176 exit_status:1 exited_at:{seconds:1769192886 nanos:302906083}" Jan 23 18:28:06.301000 audit: BPF prog-id=150 op=UNLOAD Jan 23 18:28:06.314489 kernel: audit: type=1334 audit(1769192886.301:520): prog-id=150 op=UNLOAD Jan 23 18:28:06.363280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6-rootfs.mount: Deactivated successfully. 
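The pod_startup_latency_tracker record above for tigera-operator-7dcd859c48-8t99c is internally consistent: podStartE2EDuration is the time from podCreationTimestamp to the observed running time, and podStartSLOduration appears to be that duration minus the image-pull window. A quick check of the arithmetic with the logged values:

    from decimal import Decimal

    # Offsets in seconds from podCreationTimestamp (18:27:58), copied from
    # the pod_startup_latency_tracker record above.
    first_started_pulling = Decimal("1.703062263")   # 18:27:59.703062263
    last_finished_pulling = Decimal("5.775358873")   # 18:28:03.775358873
    observed_running      = Decimal("6.515667228")   # 18:28:04.515667228

    pull_window  = last_finished_pulling - first_started_pulling
    e2e_duration = observed_running              # creation -> observed running
    slo_duration = e2e_duration - pull_window    # startup time excluding image pull

    print(e2e_duration)  # 6.515667228 -> matches podStartE2EDuration
    print(slo_duration)  # 2.443370618 -> matches podStartSLOduration
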
Jan 23 18:28:07.524004 kubelet[2842]: I0123 18:28:07.523835 2842 scope.go:117] "RemoveContainer" containerID="47b5648ea42f67cf45c17af25d12566123fe73f895a505a9e35272d9de4c1ec6" Jan 23 18:28:07.532144 containerd[1632]: time="2026-01-23T18:28:07.532090116Z" level=info msg="CreateContainer within sandbox \"19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 18:28:07.558452 containerd[1632]: time="2026-01-23T18:28:07.558271245Z" level=info msg="Container 8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:07.572843 containerd[1632]: time="2026-01-23T18:28:07.572764192Z" level=info msg="CreateContainer within sandbox \"19ce1afc582a186d082aebd215be62bcbb2f53b85bbdef2abadb6be240493776\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34\"" Jan 23 18:28:07.573809 containerd[1632]: time="2026-01-23T18:28:07.573778347Z" level=info msg="StartContainer for \"8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34\"" Jan 23 18:28:07.576035 containerd[1632]: time="2026-01-23T18:28:07.575997449Z" level=info msg="connecting to shim 8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34" address="unix:///run/containerd/s/aec75b0478873d585b2066928f1ec21a44d57b6a10a493bf15a09d5a594bb872" protocol=ttrpc version=3 Jan 23 18:28:07.625811 systemd[1]: Started cri-containerd-8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34.scope - libcontainer container 8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34. Jan 23 18:28:07.685000 audit: BPF prog-id=151 op=LOAD Jan 23 18:28:07.691467 kernel: audit: type=1334 audit(1769192887.685:521): prog-id=151 op=LOAD Jan 23 18:28:07.691541 kernel: audit: type=1334 audit(1769192887.686:522): prog-id=152 op=LOAD Jan 23 18:28:07.686000 audit: BPF prog-id=152 op=LOAD Jan 23 18:28:07.686000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.720933 kernel: audit: type=1300 audit(1769192887.686:522): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.721105 kernel: audit: type=1327 audit(1769192887.686:522): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.721176 kernel: audit: type=1334 audit(1769192887.686:523): prog-id=152 op=UNLOAD Jan 23 18:28:07.686000 audit: BPF prog-id=152 op=UNLOAD Jan 23 18:28:07.737786 kernel: audit: type=1300 
audit(1769192887.686:523): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit[3244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.751157 kernel: audit: type=1327 audit(1769192887.686:523): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.751499 kernel: audit: type=1334 audit(1769192887.686:524): prog-id=153 op=LOAD Jan 23 18:28:07.686000 audit: BPF prog-id=153 op=LOAD Jan 23 18:28:07.686000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.686000 audit: BPF prog-id=154 op=LOAD Jan 23 18:28:07.686000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.686000 audit: BPF prog-id=154 op=UNLOAD Jan 23 18:28:07.686000 audit[3244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.687000 audit: BPF prog-id=153 op=UNLOAD Jan 23 18:28:07.687000 audit[3244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.687000 audit: BPF prog-id=155 op=LOAD Jan 23 18:28:07.687000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2933 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:07.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861393039666539306435333231373866623665653439623637393565 Jan 23 18:28:07.760362 containerd[1632]: time="2026-01-23T18:28:07.760311479Z" level=info msg="StartContainer for \"8a909fe90d532178fb6ee49b6795ec562d5f6b31af0c887637f1460f3088fb34\" returns successfully" Jan 23 18:28:09.989021 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 23 18:28:09.987000 audit[1827]: USER_END pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:28:09.988000 audit[1827]: CRED_DISP pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:28:09.996618 sshd[1826]: Connection closed by 10.0.0.1 port 58134 Jan 23 18:28:09.998669 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 23 18:28:10.000000 audit[1822]: USER_END pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:28:10.001000 audit[1822]: CRED_DISP pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:28:10.006599 systemd[1]: sshd@6-10.0.0.29:22-10.0.0.1:58134.service: Deactivated successfully. Jan 23 18:28:10.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.29:22-10.0.0.1:58134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:28:10.009763 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:28:10.010107 systemd[1]: session-8.scope: Consumed 12.690s CPU time, 217.3M memory peak. Jan 23 18:28:10.012464 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:28:10.014499 systemd-logind[1586]: Removed session 8. 
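The tigera-operator sequence above (exit_status:1, "RemoveContainer" for the old ID, then CreateContainer with Attempt:1 in the same sandbox) is the usual crash-restart path. A toy sketch of that bookkeeping, with the container IDs shortened; this is illustrative only, not kubelet code:

    from dataclasses import dataclass

    @dataclass
    class ContainerRecord:
        name: str
        attempt: int
        container_id: str

    def restart_in_sandbox(dead: ContainerRecord, new_container_id: str) -> ContainerRecord:
        # In the log: RemoveContainer for the old ID, then a CreateContainer/
        # StartContainer pair for the replacement in the same sandbox.
        return ContainerRecord(dead.name, dead.attempt + 1, new_container_id)

    dead = ContainerRecord("tigera-operator", 0, "47b5648ea42f6...")  # IDs shortened here
    replacement = restart_in_sandbox(dead, "8a909fe90d532...")
    print(replacement.attempt)  # 1, matching &ContainerMetadata{...,Attempt:1,}
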
Jan 23 18:28:12.300000 audit[3303]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.311716 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 23 18:28:12.311870 kernel: audit: type=1325 audit(1769192892.300:534): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.300000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff66eca530 a2=0 a3=7fff66eca51c items=0 ppid=3005 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.325787 kernel: audit: type=1300 audit(1769192892.300:534): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff66eca530 a2=0 a3=7fff66eca51c items=0 ppid=3005 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.325886 kernel: audit: type=1327 audit(1769192892.300:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.331764 kernel: audit: type=1325 audit(1769192892.325:535): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.325000 audit[3303]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.325000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff66eca530 a2=0 a3=0 items=0 ppid=3005 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.365512 kernel: audit: type=1300 audit(1769192892.325:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff66eca530 a2=0 a3=0 items=0 ppid=3005 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.374351 kernel: audit: type=1327 audit(1769192892.325:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.408000 audit[3305]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.408000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb8ef9b40 a2=0 a3=7ffeb8ef9b2c items=0 ppid=3005 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.433516 
kernel: audit: type=1325 audit(1769192892.408:536): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.433675 kernel: audit: type=1300 audit(1769192892.408:536): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb8ef9b40 a2=0 a3=7ffeb8ef9b2c items=0 ppid=3005 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.433727 kernel: audit: type=1327 audit(1769192892.408:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:12.440937 kernel: audit: type=1325 audit(1769192892.436:537): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.436000 audit[3305]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:12.436000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb8ef9b40 a2=0 a3=0 items=0 ppid=3005 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:12.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:23.601515 kubelet[2842]: E0123 18:28:23.599022 2842 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.313s" Jan 23 18:28:25.447513 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 23 18:28:25.447898 kernel: audit: type=1325 audit(1769192905.432:538): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.432000 audit[3312]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.432000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff9afa73d0 a2=0 a3=7fff9afa73bc items=0 ppid=3005 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.478050 kernel: audit: type=1300 audit(1769192905.432:538): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff9afa73d0 a2=0 a3=7fff9afa73bc items=0 ppid=3005 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.478240 kernel: audit: type=1327 audit(1769192905.432:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.479719 kernel: 
audit: type=1325 audit(1769192905.477:539): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.477000 audit[3312]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.477000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9afa73d0 a2=0 a3=0 items=0 ppid=3005 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.506030 kernel: audit: type=1300 audit(1769192905.477:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9afa73d0 a2=0 a3=0 items=0 ppid=3005 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.515597 kernel: audit: type=1327 audit(1769192905.477:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.614000 audit[3314]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.626602 kernel: audit: type=1325 audit(1769192905.614:540): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.614000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1bc37bf0 a2=0 a3=7ffe1bc37bdc items=0 ppid=3005 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.654533 kernel: audit: type=1300 audit(1769192905.614:540): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1bc37bf0 a2=0 a3=7ffe1bc37bdc items=0 ppid=3005 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.629000 audit[3314]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.673241 kernel: audit: type=1327 audit(1769192905.614:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:25.673638 kernel: audit: type=1325 audit(1769192905.629:541): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:25.629000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1bc37bf0 a2=0 a3=0 items=0 ppid=3005 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:25.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:27.289783 kubelet[2842]: I0123 18:28:27.289313 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3adf34f2-64b7-4d71-96e5-5ea860f0c1f8-typha-certs\") pod \"calico-typha-778fc47968-fp9lz\" (UID: \"3adf34f2-64b7-4d71-96e5-5ea860f0c1f8\") " pod="calico-system/calico-typha-778fc47968-fp9lz" Jan 23 18:28:27.292275 kubelet[2842]: I0123 18:28:27.290192 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3adf34f2-64b7-4d71-96e5-5ea860f0c1f8-tigera-ca-bundle\") pod \"calico-typha-778fc47968-fp9lz\" (UID: \"3adf34f2-64b7-4d71-96e5-5ea860f0c1f8\") " pod="calico-system/calico-typha-778fc47968-fp9lz" Jan 23 18:28:27.292275 kubelet[2842]: I0123 18:28:27.290232 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxsl8\" (UniqueName: \"kubernetes.io/projected/3adf34f2-64b7-4d71-96e5-5ea860f0c1f8-kube-api-access-nxsl8\") pod \"calico-typha-778fc47968-fp9lz\" (UID: \"3adf34f2-64b7-4d71-96e5-5ea860f0c1f8\") " pod="calico-system/calico-typha-778fc47968-fp9lz" Jan 23 18:28:27.295659 systemd[1]: Created slice kubepods-besteffort-pod3adf34f2_64b7_4d71_96e5_5ea860f0c1f8.slice - libcontainer container kubepods-besteffort-pod3adf34f2_64b7_4d71_96e5_5ea860f0c1f8.slice. Jan 23 18:28:27.295000 audit[3316]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:27.295000 audit[3316]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd874f7110 a2=0 a3=7ffd874f70fc items=0 ppid=3005 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:27.305000 audit[3316]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:27.305000 audit[3316]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd874f7110 a2=0 a3=0 items=0 ppid=3005 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:27.376213 systemd[1]: Created slice kubepods-besteffort-pod68484ee7_7a5d_4ea1_96b4_a11c747d4ade.slice - libcontainer container kubepods-besteffort-pod68484ee7_7a5d_4ea1_96b4_a11c747d4ade.slice. 
Jan 23 18:28:27.391645 kubelet[2842]: I0123 18:28:27.391149 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-cni-log-dir\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391645 kubelet[2842]: I0123 18:28:27.391183 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-cni-bin-dir\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391645 kubelet[2842]: I0123 18:28:27.391197 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-var-run-calico\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391645 kubelet[2842]: I0123 18:28:27.391212 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-policysync\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391645 kubelet[2842]: I0123 18:28:27.391225 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-tigera-ca-bundle\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391948 kubelet[2842]: I0123 18:28:27.391239 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-flexvol-driver-host\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391948 kubelet[2842]: I0123 18:28:27.391285 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-xtables-lock\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391948 kubelet[2842]: I0123 18:28:27.391310 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdbl2\" (UniqueName: \"kubernetes.io/projected/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-kube-api-access-bdbl2\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391948 kubelet[2842]: I0123 18:28:27.391325 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-cni-net-dir\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.391948 kubelet[2842]: I0123 18:28:27.391340 2842 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-var-lib-calico\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.392098 kubelet[2842]: I0123 18:28:27.391354 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-node-certs\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.392098 kubelet[2842]: I0123 18:28:27.391472 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68484ee7-7a5d-4ea1-96b4-a11c747d4ade-lib-modules\") pod \"calico-node-lwh25\" (UID: \"68484ee7-7a5d-4ea1-96b4-a11c747d4ade\") " pod="calico-system/calico-node-lwh25" Jan 23 18:28:27.486297 kubelet[2842]: E0123 18:28:27.486100 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:27.491722 kubelet[2842]: I0123 18:28:27.491686 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/420164f1-10e4-4309-843a-9bf4c7513aff-registration-dir\") pod \"csi-node-driver-fbkc5\" (UID: \"420164f1-10e4-4309-843a-9bf4c7513aff\") " pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:27.492467 kubelet[2842]: I0123 18:28:27.492085 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/420164f1-10e4-4309-843a-9bf4c7513aff-socket-dir\") pod \"csi-node-driver-fbkc5\" (UID: \"420164f1-10e4-4309-843a-9bf4c7513aff\") " pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:27.492467 kubelet[2842]: I0123 18:28:27.492137 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/420164f1-10e4-4309-843a-9bf4c7513aff-varrun\") pod \"csi-node-driver-fbkc5\" (UID: \"420164f1-10e4-4309-843a-9bf4c7513aff\") " pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:27.492467 kubelet[2842]: I0123 18:28:27.492151 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/420164f1-10e4-4309-843a-9bf4c7513aff-kubelet-dir\") pod \"csi-node-driver-fbkc5\" (UID: \"420164f1-10e4-4309-843a-9bf4c7513aff\") " pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:27.492467 kubelet[2842]: I0123 18:28:27.492182 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454cg\" (UniqueName: \"kubernetes.io/projected/420164f1-10e4-4309-843a-9bf4c7513aff-kube-api-access-454cg\") pod \"csi-node-driver-fbkc5\" (UID: \"420164f1-10e4-4309-843a-9bf4c7513aff\") " pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:27.496462 kubelet[2842]: E0123 18:28:27.495933 2842 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 23 18:28:27.496462 kubelet[2842]: W0123 18:28:27.496058 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.496462 kubelet[2842]: E0123 18:28:27.496123 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.502962 kubelet[2842]: E0123 18:28:27.502843 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.502962 kubelet[2842]: W0123 18:28:27.502868 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.503128 kubelet[2842]: E0123 18:28:27.502887 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.504732 kubelet[2842]: E0123 18:28:27.504657 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.504804 kubelet[2842]: W0123 18:28:27.504758 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.504804 kubelet[2842]: E0123 18:28:27.504773 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.505300 kubelet[2842]: E0123 18:28:27.505213 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.505300 kubelet[2842]: W0123 18:28:27.505319 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.505300 kubelet[2842]: E0123 18:28:27.505337 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.507138 kubelet[2842]: E0123 18:28:27.507103 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.507138 kubelet[2842]: W0123 18:28:27.507138 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.507220 kubelet[2842]: E0123 18:28:27.507150 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.518904 kubelet[2842]: E0123 18:28:27.514545 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.518904 kubelet[2842]: W0123 18:28:27.514578 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.518904 kubelet[2842]: E0123 18:28:27.514594 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.528942 kubelet[2842]: E0123 18:28:27.528664 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.529818 kubelet[2842]: W0123 18:28:27.529638 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.529818 kubelet[2842]: E0123 18:28:27.529673 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.593896 kubelet[2842]: E0123 18:28:27.593683 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.593896 kubelet[2842]: W0123 18:28:27.593738 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.593896 kubelet[2842]: E0123 18:28:27.593763 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.594866 kubelet[2842]: E0123 18:28:27.594532 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.594866 kubelet[2842]: W0123 18:28:27.594551 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.594866 kubelet[2842]: E0123 18:28:27.594624 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.595908 kubelet[2842]: E0123 18:28:27.595782 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.595908 kubelet[2842]: W0123 18:28:27.595812 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.595908 kubelet[2842]: E0123 18:28:27.595845 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.596551 kubelet[2842]: E0123 18:28:27.596501 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.596551 kubelet[2842]: W0123 18:28:27.596538 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.596551 kubelet[2842]: E0123 18:28:27.596551 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.597451 kubelet[2842]: E0123 18:28:27.596956 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.597451 kubelet[2842]: W0123 18:28:27.597034 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.597451 kubelet[2842]: E0123 18:28:27.597045 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.597624 kubelet[2842]: E0123 18:28:27.597541 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.597624 kubelet[2842]: W0123 18:28:27.597603 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.597624 kubelet[2842]: E0123 18:28:27.597613 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.598161 kubelet[2842]: E0123 18:28:27.598123 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.598161 kubelet[2842]: W0123 18:28:27.598155 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.598161 kubelet[2842]: E0123 18:28:27.598165 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.598647 kubelet[2842]: E0123 18:28:27.598608 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.598717 kubelet[2842]: W0123 18:28:27.598707 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.598758 kubelet[2842]: E0123 18:28:27.598719 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.599236 kubelet[2842]: E0123 18:28:27.599197 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.599236 kubelet[2842]: W0123 18:28:27.599214 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.599236 kubelet[2842]: E0123 18:28:27.599225 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.599799 kubelet[2842]: E0123 18:28:27.599757 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.599799 kubelet[2842]: W0123 18:28:27.599788 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.599799 kubelet[2842]: E0123 18:28:27.599799 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.600333 kubelet[2842]: E0123 18:28:27.600289 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.600519 kubelet[2842]: W0123 18:28:27.600373 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.600578 kubelet[2842]: E0123 18:28:27.600538 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.601275 kubelet[2842]: E0123 18:28:27.601194 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:27.601275 kubelet[2842]: E0123 18:28:27.601274 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.601370 kubelet[2842]: W0123 18:28:27.601285 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.601370 kubelet[2842]: E0123 18:28:27.601295 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.602222 kubelet[2842]: E0123 18:28:27.602111 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.602222 kubelet[2842]: W0123 18:28:27.602164 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.602222 kubelet[2842]: E0123 18:28:27.602189 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.603017 kubelet[2842]: E0123 18:28:27.602782 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.603017 kubelet[2842]: W0123 18:28:27.602801 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.603017 kubelet[2842]: E0123 18:28:27.602814 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.603735 kubelet[2842]: E0123 18:28:27.603681 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.603854 containerd[1632]: time="2026-01-23T18:28:27.603773961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778fc47968-fp9lz,Uid:3adf34f2-64b7-4d71-96e5-5ea860f0c1f8,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:27.605182 kubelet[2842]: W0123 18:28:27.604902 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.605182 kubelet[2842]: E0123 18:28:27.604919 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.605664 kubelet[2842]: E0123 18:28:27.605622 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.605664 kubelet[2842]: W0123 18:28:27.605654 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.605664 kubelet[2842]: E0123 18:28:27.605665 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.606460 kubelet[2842]: E0123 18:28:27.606350 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.606789 kubelet[2842]: W0123 18:28:27.606721 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.606789 kubelet[2842]: E0123 18:28:27.606772 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.607212 kubelet[2842]: E0123 18:28:27.607161 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.607212 kubelet[2842]: W0123 18:28:27.607192 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.607212 kubelet[2842]: E0123 18:28:27.607204 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.607721 kubelet[2842]: E0123 18:28:27.607672 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.607721 kubelet[2842]: W0123 18:28:27.607702 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.607721 kubelet[2842]: E0123 18:28:27.607712 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.608492 kubelet[2842]: E0123 18:28:27.608452 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.608492 kubelet[2842]: W0123 18:28:27.608488 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.608563 kubelet[2842]: E0123 18:28:27.608501 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.608924 kubelet[2842]: E0123 18:28:27.608827 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.608924 kubelet[2842]: W0123 18:28:27.608863 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.608924 kubelet[2842]: E0123 18:28:27.608874 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.609730 kubelet[2842]: E0123 18:28:27.609698 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.609770 kubelet[2842]: W0123 18:28:27.609732 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.609770 kubelet[2842]: E0123 18:28:27.609743 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.610264 kubelet[2842]: E0123 18:28:27.610229 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.610264 kubelet[2842]: W0123 18:28:27.610260 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.610329 kubelet[2842]: E0123 18:28:27.610270 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.610786 kubelet[2842]: E0123 18:28:27.610753 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.610917 kubelet[2842]: W0123 18:28:27.610790 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.610917 kubelet[2842]: E0123 18:28:27.610807 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.611476 kubelet[2842]: E0123 18:28:27.611348 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.611476 kubelet[2842]: W0123 18:28:27.611459 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.611476 kubelet[2842]: E0123 18:28:27.611471 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:27.630513 kubelet[2842]: E0123 18:28:27.630358 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:27.632319 kubelet[2842]: W0123 18:28:27.632276 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:27.632468 kubelet[2842]: E0123 18:28:27.632321 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:27.655756 containerd[1632]: time="2026-01-23T18:28:27.655691401Z" level=info msg="connecting to shim bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4" address="unix:///run/containerd/s/0a1b1a303d92b750f44720e739b4eea686f1ed9f9fca936a4c11ea216f7d7f06" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:27.680281 kubelet[2842]: E0123 18:28:27.680185 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:27.680891 containerd[1632]: time="2026-01-23T18:28:27.680813143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lwh25,Uid:68484ee7-7a5d-4ea1-96b4-a11c747d4ade,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:27.716763 systemd[1]: Started cri-containerd-bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4.scope - libcontainer container bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4. Jan 23 18:28:27.747000 audit: BPF prog-id=156 op=LOAD Jan 23 18:28:27.748000 audit: BPF prog-id=157 op=LOAD Jan 23 18:28:27.748000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.749000 audit: BPF prog-id=157 op=UNLOAD Jan 23 18:28:27.749000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.749000 audit: BPF prog-id=158 op=LOAD Jan 23 18:28:27.749000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.749000 audit: BPF prog-id=159 op=LOAD Jan 23 18:28:27.749000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.749000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.750000 audit: BPF prog-id=159 op=UNLOAD Jan 23 18:28:27.750000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.750000 audit: BPF prog-id=158 op=UNLOAD Jan 23 18:28:27.750000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.750000 audit: BPF prog-id=160 op=LOAD Jan 23 18:28:27.750000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633861346461323130356165393730383766633864343462623431 Jan 23 18:28:27.754774 containerd[1632]: time="2026-01-23T18:28:27.754286315Z" level=info msg="connecting to shim fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91" address="unix:///run/containerd/s/eeb19d307cd62485163c76b3b8352ea3385e83ba942321cf68e467d6564cc38e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:27.818092 systemd[1]: Started cri-containerd-fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91.scope - libcontainer container fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91. 
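[Editor's note] The repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory nodeagent~uds: the binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is missing, the "init" call therefore produces empty output, and decoding that empty output as JSON fails with "unexpected end of JSON input". A hedged Go sketch of that probe step (the JSON field names follow the published FlexVolume convention and are assumptions for illustration, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver prints for "init";
// the exact fields here are assumptions for illustration.
type driverStatus struct {
	Status       string `json:"status"`
	Message      string `json:"message,omitempty"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func probe(driver string) (*driverStatus, error) {
	// An absent binary yields a non-nil exec error and empty output;
	// json.Unmarshal on empty bytes then fails with
	// "unexpected end of JSON input" -- the paired errors seen in the log.
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	if _, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
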
Jan 23 18:28:27.827717 containerd[1632]: time="2026-01-23T18:28:27.827545927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778fc47968-fp9lz,Uid:3adf34f2-64b7-4d71-96e5-5ea860f0c1f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4\"" Jan 23 18:28:27.833798 kubelet[2842]: E0123 18:28:27.833701 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:27.837091 containerd[1632]: time="2026-01-23T18:28:27.837055102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:28:27.850000 audit: BPF prog-id=161 op=LOAD Jan 23 18:28:27.851000 audit: BPF prog-id=162 op=LOAD Jan 23 18:28:27.851000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.851000 audit: BPF prog-id=162 op=UNLOAD Jan 23 18:28:27.851000 audit[3420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.852000 audit: BPF prog-id=163 op=LOAD Jan 23 18:28:27.852000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.852000 audit: BPF prog-id=164 op=LOAD Jan 23 18:28:27.852000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.852000 audit: BPF prog-id=164 op=UNLOAD 
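[Editor's note] The recurring dns.go "Nameserver limits exceeded" messages indicate the host resolv.conf lists more nameservers than kubelet will pass through; the log shows it keeping only 1.1.1.1 1.0.0.1 8.8.8.8. A rough Go sketch of that truncation, assuming a limit of three entries (the limit and the extra server below are inferred/hypothetical, not read from kubelet source or this host):

package main

import "fmt"

// maxNameservers is an assumption inferred from the three servers kept in
// the applied nameserver line above (glibc resolvers traditionally use at most three).
const maxNameservers = 3

func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// Hypothetical resolv.conf contents with one server too many.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, omitted := applyNameserverLimit(servers)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
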
Jan 23 18:28:27.852000 audit[3420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.852000 audit: BPF prog-id=163 op=UNLOAD Jan 23 18:28:27.852000 audit[3420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.852000 audit: BPF prog-id=165 op=LOAD Jan 23 18:28:27.852000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3407 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664656635633931303131383865376330303237383066646430383333 Jan 23 18:28:27.892043 containerd[1632]: time="2026-01-23T18:28:27.891858329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lwh25,Uid:68484ee7-7a5d-4ea1-96b4-a11c747d4ade,Namespace:calico-system,Attempt:0,} returns sandbox id \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\"" Jan 23 18:28:27.893588 kubelet[2842]: E0123 18:28:27.893473 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:28.354000 audit[3453]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3453 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:28.354000 audit[3453]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd10a49310 a2=0 a3=7ffd10a492fc items=0 ppid=3005 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:28.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:28.376000 audit[3453]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3453 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:28.376000 audit[3453]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd10a49310 a2=0 a3=0 items=0 ppid=3005 pid=3453 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:28.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:28.917287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075186403.mount: Deactivated successfully. Jan 23 18:28:29.411630 kubelet[2842]: E0123 18:28:29.411335 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:31.001946 containerd[1632]: time="2026-01-23T18:28:31.001775168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.003312 containerd[1632]: time="2026-01-23T18:28:31.003224081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 23 18:28:31.005289 containerd[1632]: time="2026-01-23T18:28:31.005172892Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.009967 containerd[1632]: time="2026-01-23T18:28:31.009846271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.010285 containerd[1632]: time="2026-01-23T18:28:31.010185910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.172934855s" Jan 23 18:28:31.010285 containerd[1632]: time="2026-01-23T18:28:31.010247952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:28:31.012197 containerd[1632]: time="2026-01-23T18:28:31.012131972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:28:31.033019 containerd[1632]: time="2026-01-23T18:28:31.032947884Z" level=info msg="CreateContainer within sandbox \"bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:28:31.055278 containerd[1632]: time="2026-01-23T18:28:31.055028223Z" level=info msg="Container 6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:31.076881 containerd[1632]: time="2026-01-23T18:28:31.076718729Z" level=info msg="CreateContainer within sandbox \"bcc8a4da2105ae97087fc8d44bb4196b30f6f0416d435b4ecf21fe3051ee02e4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3\"" Jan 23 18:28:31.079551 containerd[1632]: 
time="2026-01-23T18:28:31.077928335Z" level=info msg="StartContainer for \"6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3\"" Jan 23 18:28:31.081052 containerd[1632]: time="2026-01-23T18:28:31.080995417Z" level=info msg="connecting to shim 6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3" address="unix:///run/containerd/s/0a1b1a303d92b750f44720e739b4eea686f1ed9f9fca936a4c11ea216f7d7f06" protocol=ttrpc version=3 Jan 23 18:28:31.153914 systemd[1]: Started cri-containerd-6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3.scope - libcontainer container 6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3. Jan 23 18:28:31.179000 audit: BPF prog-id=166 op=LOAD Jan 23 18:28:31.184124 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 23 18:28:31.184216 kernel: audit: type=1334 audit(1769192911.179:562): prog-id=166 op=LOAD Jan 23 18:28:31.188000 audit: BPF prog-id=167 op=LOAD Jan 23 18:28:31.188000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.205617 kernel: audit: type=1334 audit(1769192911.188:563): prog-id=167 op=LOAD Jan 23 18:28:31.205828 kernel: audit: type=1300 audit(1769192911.188:563): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.206162 kernel: audit: type=1327 audit(1769192911.188:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=167 op=UNLOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.250691 kernel: audit: type=1334 audit(1769192911.189:564): prog-id=167 op=UNLOAD Jan 23 18:28:31.256287 kernel: audit: type=1300 audit(1769192911.189:564): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.256858 kernel: audit: type=1327 audit(1769192911.189:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=168 op=LOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.284999 kernel: audit: type=1334 audit(1769192911.189:565): prog-id=168 op=LOAD Jan 23 18:28:31.285086 kernel: audit: type=1300 audit(1769192911.189:565): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.285116 kernel: audit: type=1327 audit(1769192911.189:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=169 op=LOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=169 op=UNLOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=168 op=UNLOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.189000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.189000 audit: BPF prog-id=170 op=LOAD Jan 23 18:28:31.189000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3370 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393435323734326466383237623366383462636164643233623234 Jan 23 18:28:31.353003 containerd[1632]: time="2026-01-23T18:28:31.348968674Z" level=info msg="StartContainer for \"6a9452742df827b3f84bcadd23b2482d58957681e4fcb72738c24954237d4eb3\" returns successfully" Jan 23 18:28:31.423681 kubelet[2842]: E0123 18:28:31.422711 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:31.702534 kubelet[2842]: E0123 18:28:31.702355 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:31.774524 kubelet[2842]: E0123 18:28:31.774118 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.774524 kubelet[2842]: W0123 18:28:31.774156 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.774524 kubelet[2842]: E0123 18:28:31.774190 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.777020 kubelet[2842]: E0123 18:28:31.776557 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.777020 kubelet[2842]: W0123 18:28:31.776583 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.777020 kubelet[2842]: E0123 18:28:31.776606 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.779040 kubelet[2842]: E0123 18:28:31.778943 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.779040 kubelet[2842]: W0123 18:28:31.779017 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.779163 kubelet[2842]: E0123 18:28:31.779061 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.781292 kubelet[2842]: E0123 18:28:31.781211 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.782718 kubelet[2842]: W0123 18:28:31.782512 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.783182 kubelet[2842]: E0123 18:28:31.782848 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.787113 kubelet[2842]: E0123 18:28:31.786565 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.787113 kubelet[2842]: W0123 18:28:31.786596 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.787113 kubelet[2842]: E0123 18:28:31.786784 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.789098 kubelet[2842]: I0123 18:28:31.788910 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-778fc47968-fp9lz" podStartSLOduration=1.612936751 podStartE2EDuration="4.788855081s" podCreationTimestamp="2026-01-23 18:28:27 +0000 UTC" firstStartedPulling="2026-01-23 18:28:27.835701873 +0000 UTC m=+33.796984876" lastFinishedPulling="2026-01-23 18:28:31.011620232 +0000 UTC m=+36.972903206" observedRunningTime="2026-01-23 18:28:31.728600131 +0000 UTC m=+37.689883114" watchObservedRunningTime="2026-01-23 18:28:31.788855081 +0000 UTC m=+37.750138064" Jan 23 18:28:31.792612 kubelet[2842]: E0123 18:28:31.792531 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.792612 kubelet[2842]: W0123 18:28:31.792583 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.792612 kubelet[2842]: E0123 18:28:31.792606 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.793374 kubelet[2842]: E0123 18:28:31.793288 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.793374 kubelet[2842]: W0123 18:28:31.793445 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.793374 kubelet[2842]: E0123 18:28:31.793465 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.794195 kubelet[2842]: E0123 18:28:31.794084 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.794195 kubelet[2842]: W0123 18:28:31.794135 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.794195 kubelet[2842]: E0123 18:28:31.794152 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.795574 kubelet[2842]: E0123 18:28:31.795545 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.795574 kubelet[2842]: W0123 18:28:31.795563 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.795574 kubelet[2842]: E0123 18:28:31.795577 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.796308 kubelet[2842]: E0123 18:28:31.795999 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.796308 kubelet[2842]: W0123 18:28:31.796016 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.796308 kubelet[2842]: E0123 18:28:31.796033 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.798376 kubelet[2842]: E0123 18:28:31.798206 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.798376 kubelet[2842]: W0123 18:28:31.798242 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.798376 kubelet[2842]: E0123 18:28:31.798277 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.800489 kubelet[2842]: E0123 18:28:31.800303 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.800489 kubelet[2842]: W0123 18:28:31.800327 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.800489 kubelet[2842]: E0123 18:28:31.800346 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.801729 kubelet[2842]: E0123 18:28:31.801709 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.801897 kubelet[2842]: W0123 18:28:31.801875 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.802338 kubelet[2842]: E0123 18:28:31.802169 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.805717 kubelet[2842]: E0123 18:28:31.805589 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.805717 kubelet[2842]: W0123 18:28:31.805614 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.805717 kubelet[2842]: E0123 18:28:31.805635 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.807501 kubelet[2842]: E0123 18:28:31.806587 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.807501 kubelet[2842]: W0123 18:28:31.806609 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.807501 kubelet[2842]: E0123 18:28:31.806625 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.836000 audit[3532]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:31.836000 audit[3532]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc33854b90 a2=0 a3=7ffc33854b7c items=0 ppid=3005 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:31.848000 audit[3532]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:31.848000 audit[3532]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc33854b90 a2=0 a3=7ffc33854b7c items=0 ppid=3005 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:31.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:31.874608 kubelet[2842]: E0123 18:28:31.874012 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.874608 kubelet[2842]: W0123 18:28:31.874484 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.874608 kubelet[2842]: E0123 18:28:31.874516 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.877873 kubelet[2842]: E0123 18:28:31.877793 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.878303 kubelet[2842]: W0123 18:28:31.878131 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.878303 kubelet[2842]: E0123 18:28:31.878154 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.880863 containerd[1632]: time="2026-01-23T18:28:31.880178445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.881302 kubelet[2842]: E0123 18:28:31.881072 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.881302 kubelet[2842]: W0123 18:28:31.881085 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.881302 kubelet[2842]: E0123 18:28:31.881101 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.883365 kubelet[2842]: E0123 18:28:31.883129 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.883365 kubelet[2842]: W0123 18:28:31.883336 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.883365 kubelet[2842]: E0123 18:28:31.883350 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.884312 containerd[1632]: time="2026-01-23T18:28:31.883358449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:31.885078 kubelet[2842]: E0123 18:28:31.884908 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.885415 kubelet[2842]: W0123 18:28:31.885253 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.885415 kubelet[2842]: E0123 18:28:31.885273 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.886472 containerd[1632]: time="2026-01-23T18:28:31.885674442Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.886669 kubelet[2842]: E0123 18:28:31.886652 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.886785 kubelet[2842]: W0123 18:28:31.886718 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.886785 kubelet[2842]: E0123 18:28:31.886781 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.888570 kubelet[2842]: E0123 18:28:31.887795 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.888570 kubelet[2842]: W0123 18:28:31.887815 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.888570 kubelet[2842]: E0123 18:28:31.887829 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.888570 kubelet[2842]: E0123 18:28:31.888138 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.888570 kubelet[2842]: W0123 18:28:31.888152 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.888570 kubelet[2842]: E0123 18:28:31.888164 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.889047 kubelet[2842]: E0123 18:28:31.889006 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.889167 kubelet[2842]: W0123 18:28:31.889050 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.889167 kubelet[2842]: E0123 18:28:31.889063 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.889964 kubelet[2842]: E0123 18:28:31.889918 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.889964 kubelet[2842]: W0123 18:28:31.889969 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.889964 kubelet[2842]: E0123 18:28:31.889984 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.890978 kubelet[2842]: E0123 18:28:31.890914 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.890978 kubelet[2842]: W0123 18:28:31.890961 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.890978 kubelet[2842]: E0123 18:28:31.890974 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.891686 kubelet[2842]: E0123 18:28:31.891657 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.891686 kubelet[2842]: W0123 18:28:31.891672 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.891686 kubelet[2842]: E0123 18:28:31.891682 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.893138 containerd[1632]: time="2026-01-23T18:28:31.893071330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:31.893589 kubelet[2842]: E0123 18:28:31.893496 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.893589 kubelet[2842]: W0123 18:28:31.893533 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.893589 kubelet[2842]: E0123 18:28:31.893547 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.895565 kubelet[2842]: E0123 18:28:31.895477 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.895565 kubelet[2842]: W0123 18:28:31.895516 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.895565 kubelet[2842]: E0123 18:28:31.895530 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.896724 kubelet[2842]: E0123 18:28:31.896687 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.896724 kubelet[2842]: W0123 18:28:31.896725 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.896724 kubelet[2842]: E0123 18:28:31.896782 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.897073 containerd[1632]: time="2026-01-23T18:28:31.896928312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 884.749875ms" Jan 23 18:28:31.897073 containerd[1632]: time="2026-01-23T18:28:31.897011031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:28:31.899256 kubelet[2842]: E0123 18:28:31.899205 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.899256 kubelet[2842]: W0123 18:28:31.899251 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.899341 kubelet[2842]: E0123 18:28:31.899266 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.900586 kubelet[2842]: E0123 18:28:31.900531 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.900586 kubelet[2842]: W0123 18:28:31.900575 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.900586 kubelet[2842]: E0123 18:28:31.900589 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:28:31.901600 kubelet[2842]: E0123 18:28:31.901555 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:28:31.901600 kubelet[2842]: W0123 18:28:31.901599 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:28:31.901672 kubelet[2842]: E0123 18:28:31.901611 2842 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:28:31.907088 containerd[1632]: time="2026-01-23T18:28:31.907010149Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:28:31.923802 containerd[1632]: time="2026-01-23T18:28:31.923706299Z" level=info msg="Container 90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:31.935935 containerd[1632]: time="2026-01-23T18:28:31.935861551Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b\"" Jan 23 18:28:31.937371 containerd[1632]: time="2026-01-23T18:28:31.937248142Z" level=info msg="StartContainer for \"90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b\"" Jan 23 18:28:31.940189 containerd[1632]: time="2026-01-23T18:28:31.940111685Z" level=info msg="connecting to shim 90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b" address="unix:///run/containerd/s/eeb19d307cd62485163c76b3b8352ea3385e83ba942321cf68e467d6564cc38e" protocol=ttrpc version=3 Jan 23 18:28:32.000714 systemd[1]: Started cri-containerd-90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b.scope - libcontainer container 90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b. Jan 23 18:28:32.085000 audit: BPF prog-id=171 op=LOAD Jan 23 18:28:32.085000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3407 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653431343466616233666236373533613838313535396663306232 Jan 23 18:28:32.085000 audit: BPF prog-id=172 op=LOAD Jan 23 18:28:32.085000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3407 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653431343466616233666236373533613838313535396663306232 Jan 23 18:28:32.085000 audit: BPF prog-id=172 op=UNLOAD Jan 23 18:28:32.085000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:32.085000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653431343466616233666236373533613838313535396663306232 Jan 23 18:28:32.085000 audit: BPF prog-id=171 op=UNLOAD Jan 23 18:28:32.085000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653431343466616233666236373533613838313535396663306232 Jan 23 18:28:32.085000 audit: BPF prog-id=173 op=LOAD Jan 23 18:28:32.085000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3407 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653431343466616233666236373533613838313535396663306232 Jan 23 18:28:32.123973 containerd[1632]: time="2026-01-23T18:28:32.123815268Z" level=info msg="StartContainer for \"90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b\" returns successfully" Jan 23 18:28:32.162032 systemd[1]: cri-containerd-90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b.scope: Deactivated successfully. Jan 23 18:28:32.165000 audit: BPF prog-id=173 op=UNLOAD Jan 23 18:28:32.167960 containerd[1632]: time="2026-01-23T18:28:32.167778492Z" level=info msg="received container exit event container_id:\"90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b\" id:\"90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b\" pid:3565 exited_at:{seconds:1769192912 nanos:166623198}" Jan 23 18:28:32.224494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90e4144fab3fb6753a881559fc0b244a64f067af86c146fdb5dad28143b8d75b-rootfs.mount: Deactivated successfully. 
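The repeated driver-call.go and plugins.go errors above all come from one probe: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not present ("executable file not found in $PATH"), the call therefore returns empty output, and unmarshalling that empty string is what yields "unexpected end of JSON input". As a sketch only (this is not the real nodeagent~uds driver, just an illustration of the contract driver-call.go expects), a FlexVolume driver at that path merely has to answer init with a small JSON status object on stdout:

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver skeleton (not the actual nodeagent~uds/uds binary).
    # The kubelet invokes it as "<driver> init" and parses stdout as JSON; an empty
    # stdout is exactly what triggers "unexpected end of JSON input" above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # attach=False: this driver has no separate attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Unimplemented calls (mount, unmount, ...) are answered with "Not supported".
        print(json.dumps({"status": "Not supported",
                          "message": "operation %r not implemented" % op}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Until a driver exists at that path, the probe keeps failing and the same three-line warning burst recurs on every plugin re-scan, as seen throughout this stretch of the log.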
Jan 23 18:28:32.705439 kubelet[2842]: E0123 18:28:32.705345 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:32.706765 kubelet[2842]: E0123 18:28:32.705903 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:32.707306 containerd[1632]: time="2026-01-23T18:28:32.707223486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:28:33.412156 kubelet[2842]: E0123 18:28:33.412057 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:33.708459 kubelet[2842]: E0123 18:28:33.708238 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:35.383747 containerd[1632]: time="2026-01-23T18:28:35.383363416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:35.386338 containerd[1632]: time="2026-01-23T18:28:35.386260929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 18:28:35.388771 containerd[1632]: time="2026-01-23T18:28:35.388621922Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:35.392706 containerd[1632]: time="2026-01-23T18:28:35.392635959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:35.393997 containerd[1632]: time="2026-01-23T18:28:35.393887538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.686592622s" Jan 23 18:28:35.393997 containerd[1632]: time="2026-01-23T18:28:35.393975669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:28:35.400369 containerd[1632]: time="2026-01-23T18:28:35.400135801Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:28:35.411780 kubelet[2842]: E0123 18:28:35.411634 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:35.422160 containerd[1632]: 
time="2026-01-23T18:28:35.422054553Z" level=info msg="Container 3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:35.435688 containerd[1632]: time="2026-01-23T18:28:35.435601587Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947\"" Jan 23 18:28:35.436922 containerd[1632]: time="2026-01-23T18:28:35.436838005Z" level=info msg="StartContainer for \"3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947\"" Jan 23 18:28:35.439003 containerd[1632]: time="2026-01-23T18:28:35.438947971Z" level=info msg="connecting to shim 3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947" address="unix:///run/containerd/s/eeb19d307cd62485163c76b3b8352ea3385e83ba942321cf68e467d6564cc38e" protocol=ttrpc version=3 Jan 23 18:28:35.496924 systemd[1]: Started cri-containerd-3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947.scope - libcontainer container 3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947. Jan 23 18:28:35.616000 audit: BPF prog-id=174 op=LOAD Jan 23 18:28:35.616000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:35.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323663383338353131643033383465383432636234656263646430 Jan 23 18:28:35.616000 audit: BPF prog-id=175 op=LOAD Jan 23 18:28:35.616000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:35.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323663383338353131643033383465383432636234656263646430 Jan 23 18:28:35.616000 audit: BPF prog-id=175 op=UNLOAD Jan 23 18:28:35.616000 audit[3613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:35.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323663383338353131643033383465383432636234656263646430 Jan 23 18:28:35.616000 audit: BPF prog-id=174 op=UNLOAD Jan 23 18:28:35.616000 audit[3613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:35.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323663383338353131643033383465383432636234656263646430 Jan 23 18:28:35.616000 audit: BPF prog-id=176 op=LOAD Jan 23 18:28:35.616000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3407 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:35.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323663383338353131643033383465383432636234656263646430 Jan 23 18:28:35.678108 containerd[1632]: time="2026-01-23T18:28:35.678032652Z" level=info msg="StartContainer for \"3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947\" returns successfully" Jan 23 18:28:35.727779 kubelet[2842]: E0123 18:28:35.727721 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:36.729972 kubelet[2842]: E0123 18:28:36.729911 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:36.934766 systemd[1]: cri-containerd-3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947.scope: Deactivated successfully. Jan 23 18:28:36.936528 systemd[1]: cri-containerd-3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947.scope: Consumed 1.417s CPU time, 172.3M memory peak, 3.7M read from disk, 171.3M written to disk. Jan 23 18:28:36.940110 containerd[1632]: time="2026-01-23T18:28:36.939921129Z" level=info msg="received container exit event container_id:\"3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947\" id:\"3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947\" pid:3626 exited_at:{seconds:1769192916 nanos:937013808}" Jan 23 18:28:36.942000 audit: BPF prog-id=176 op=UNLOAD Jan 23 18:28:36.946699 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 23 18:28:36.946816 kernel: audit: type=1334 audit(1769192916.942:583): prog-id=176 op=UNLOAD Jan 23 18:28:37.002232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3626c838511d0384e842cb4ebcdd099a7370a953844f419784cc7275b3478947-rootfs.mount: Deactivated successfully. Jan 23 18:28:37.047774 kubelet[2842]: I0123 18:28:37.047605 2842 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:28:37.114547 systemd[1]: Created slice kubepods-burstable-pod06cbbdec_0484_47bb_b6b9_aae05580b8cd.slice - libcontainer container kubepods-burstable-pod06cbbdec_0484_47bb_b6b9_aae05580b8cd.slice. Jan 23 18:28:37.130678 systemd[1]: Created slice kubepods-burstable-pod308cfc11_36f8_46bc_bb62_85fc8219ee01.slice - libcontainer container kubepods-burstable-pod308cfc11_36f8_46bc_bb62_85fc8219ee01.slice. 
Jan 23 18:28:37.142767 systemd[1]: Created slice kubepods-besteffort-podfd8c84c1_3db5_46bc_b232_d92330035bbc.slice - libcontainer container kubepods-besteffort-podfd8c84c1_3db5_46bc_b232_d92330035bbc.slice. Jan 23 18:28:37.157126 systemd[1]: Created slice kubepods-besteffort-podf4756753_32cd_49e4_a9ac_3b64d97f5679.slice - libcontainer container kubepods-besteffort-podf4756753_32cd_49e4_a9ac_3b64d97f5679.slice. Jan 23 18:28:37.167288 systemd[1]: Created slice kubepods-besteffort-pod45c79f90_5bfc_4e7b_ac61_b9e42301e7a5.slice - libcontainer container kubepods-besteffort-pod45c79f90_5bfc_4e7b_ac61_b9e42301e7a5.slice. Jan 23 18:28:37.181741 systemd[1]: Created slice kubepods-besteffort-pod720b9cd4_1750_46fd_95a5_f9417f9523f5.slice - libcontainer container kubepods-besteffort-pod720b9cd4_1750_46fd_95a5_f9417f9523f5.slice. Jan 23 18:28:37.193265 systemd[1]: Created slice kubepods-besteffort-pod2f66ac8e_bae7_47e7_aa6e_37d83efdc54b.slice - libcontainer container kubepods-besteffort-pod2f66ac8e_bae7_47e7_aa6e_37d83efdc54b.slice. Jan 23 18:28:37.203184 systemd[1]: Created slice kubepods-besteffort-pod7bfb42fc_77fc_4491_a374_12534b8ba3b1.slice - libcontainer container kubepods-besteffort-pod7bfb42fc_77fc_4491_a374_12534b8ba3b1.slice. Jan 23 18:28:37.267580 kubelet[2842]: I0123 18:28:37.267224 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6nh\" (UniqueName: \"kubernetes.io/projected/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-kube-api-access-gm6nh\") pod \"whisker-85c584556b-w2wzd\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " pod="calico-system/whisker-85c584556b-w2wzd" Jan 23 18:28:37.267580 kubelet[2842]: I0123 18:28:37.267310 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrtc\" (UniqueName: \"kubernetes.io/projected/720b9cd4-1750-46fd-95a5-f9417f9523f5-kube-api-access-sbrtc\") pod \"calico-kube-controllers-f89f6994b-gxllw\" (UID: \"720b9cd4-1750-46fd-95a5-f9417f9523f5\") " pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" Jan 23 18:28:37.267580 kubelet[2842]: I0123 18:28:37.267347 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/45c79f90-5bfc-4e7b-ac61-b9e42301e7a5-calico-apiserver-certs\") pod \"calico-apiserver-6469486c9-vgp5c\" (UID: \"45c79f90-5bfc-4e7b-ac61-b9e42301e7a5\") " pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" Jan 23 18:28:37.269526 kubelet[2842]: I0123 18:28:37.268620 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldmx\" (UniqueName: \"kubernetes.io/projected/45c79f90-5bfc-4e7b-ac61-b9e42301e7a5-kube-api-access-qldmx\") pod \"calico-apiserver-6469486c9-vgp5c\" (UID: \"45c79f90-5bfc-4e7b-ac61-b9e42301e7a5\") " pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" Jan 23 18:28:37.269526 kubelet[2842]: I0123 18:28:37.268754 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-backend-key-pair\") pod \"whisker-85c584556b-w2wzd\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " pod="calico-system/whisker-85c584556b-w2wzd" Jan 23 18:28:37.269526 kubelet[2842]: I0123 18:28:37.268809 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-ca-bundle\") pod \"whisker-85c584556b-w2wzd\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " pod="calico-system/whisker-85c584556b-w2wzd" Jan 23 18:28:37.269526 kubelet[2842]: I0123 18:28:37.268854 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bfb42fc-77fc-4491-a374-12534b8ba3b1-calico-apiserver-certs\") pod \"calico-apiserver-5f6cd769bc-pxzdx\" (UID: \"7bfb42fc-77fc-4491-a374-12534b8ba3b1\") " pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" Jan 23 18:28:37.269526 kubelet[2842]: I0123 18:28:37.268896 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jztrv\" (UniqueName: \"kubernetes.io/projected/fd8c84c1-3db5-46bc-b232-d92330035bbc-kube-api-access-jztrv\") pod \"calico-apiserver-6469486c9-qrngm\" (UID: \"fd8c84c1-3db5-46bc-b232-d92330035bbc\") " pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" Jan 23 18:28:37.269772 kubelet[2842]: I0123 18:28:37.268951 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/308cfc11-36f8-46bc-bb62-85fc8219ee01-config-volume\") pod \"coredns-674b8bbfcf-4j94s\" (UID: \"308cfc11-36f8-46bc-bb62-85fc8219ee01\") " pod="kube-system/coredns-674b8bbfcf-4j94s" Jan 23 18:28:37.269772 kubelet[2842]: I0123 18:28:37.268996 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6n8w\" (UniqueName: \"kubernetes.io/projected/308cfc11-36f8-46bc-bb62-85fc8219ee01-kube-api-access-x6n8w\") pod \"coredns-674b8bbfcf-4j94s\" (UID: \"308cfc11-36f8-46bc-bb62-85fc8219ee01\") " pod="kube-system/coredns-674b8bbfcf-4j94s" Jan 23 18:28:37.269772 kubelet[2842]: I0123 18:28:37.269038 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06cbbdec-0484-47bb-b6b9-aae05580b8cd-config-volume\") pod \"coredns-674b8bbfcf-x2dz4\" (UID: \"06cbbdec-0484-47bb-b6b9-aae05580b8cd\") " pod="kube-system/coredns-674b8bbfcf-x2dz4" Jan 23 18:28:37.269772 kubelet[2842]: I0123 18:28:37.269075 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s7r\" (UniqueName: \"kubernetes.io/projected/7bfb42fc-77fc-4491-a374-12534b8ba3b1-kube-api-access-g8s7r\") pod \"calico-apiserver-5f6cd769bc-pxzdx\" (UID: \"7bfb42fc-77fc-4491-a374-12534b8ba3b1\") " pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" Jan 23 18:28:37.269772 kubelet[2842]: I0123 18:28:37.269112 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/720b9cd4-1750-46fd-95a5-f9417f9523f5-tigera-ca-bundle\") pod \"calico-kube-controllers-f89f6994b-gxllw\" (UID: \"720b9cd4-1750-46fd-95a5-f9417f9523f5\") " pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" Jan 23 18:28:37.270025 kubelet[2842]: I0123 18:28:37.269179 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fd8c84c1-3db5-46bc-b232-d92330035bbc-calico-apiserver-certs\") pod \"calico-apiserver-6469486c9-qrngm\" (UID: 
\"fd8c84c1-3db5-46bc-b232-d92330035bbc\") " pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" Jan 23 18:28:37.270025 kubelet[2842]: I0123 18:28:37.269211 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpvq\" (UniqueName: \"kubernetes.io/projected/f4756753-32cd-49e4-a9ac-3b64d97f5679-kube-api-access-6bpvq\") pod \"goldmane-666569f655-qrzq4\" (UID: \"f4756753-32cd-49e4-a9ac-3b64d97f5679\") " pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.270025 kubelet[2842]: I0123 18:28:37.269257 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4756753-32cd-49e4-a9ac-3b64d97f5679-config\") pod \"goldmane-666569f655-qrzq4\" (UID: \"f4756753-32cd-49e4-a9ac-3b64d97f5679\") " pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.270025 kubelet[2842]: I0123 18:28:37.269290 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4756753-32cd-49e4-a9ac-3b64d97f5679-goldmane-ca-bundle\") pod \"goldmane-666569f655-qrzq4\" (UID: \"f4756753-32cd-49e4-a9ac-3b64d97f5679\") " pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.270025 kubelet[2842]: I0123 18:28:37.269312 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f4756753-32cd-49e4-a9ac-3b64d97f5679-goldmane-key-pair\") pod \"goldmane-666569f655-qrzq4\" (UID: \"f4756753-32cd-49e4-a9ac-3b64d97f5679\") " pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.270196 kubelet[2842]: I0123 18:28:37.269351 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkgf\" (UniqueName: \"kubernetes.io/projected/06cbbdec-0484-47bb-b6b9-aae05580b8cd-kube-api-access-vpkgf\") pod \"coredns-674b8bbfcf-x2dz4\" (UID: \"06cbbdec-0484-47bb-b6b9-aae05580b8cd\") " pod="kube-system/coredns-674b8bbfcf-x2dz4" Jan 23 18:28:37.426130 kubelet[2842]: E0123 18:28:37.426091 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:37.433098 containerd[1632]: time="2026-01-23T18:28:37.432970870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x2dz4,Uid:06cbbdec-0484-47bb-b6b9-aae05580b8cd,Namespace:kube-system,Attempt:0,}" Jan 23 18:28:37.437910 kubelet[2842]: E0123 18:28:37.437880 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:37.442150 containerd[1632]: time="2026-01-23T18:28:37.442060880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4j94s,Uid:308cfc11-36f8-46bc-bb62-85fc8219ee01,Namespace:kube-system,Attempt:0,}" Jan 23 18:28:37.443644 systemd[1]: Created slice kubepods-besteffort-pod420164f1_10e4_4309_843a_9bf4c7513aff.slice - libcontainer container kubepods-besteffort-pod420164f1_10e4_4309_843a_9bf4c7513aff.slice. 
Jan 23 18:28:37.451719 containerd[1632]: time="2026-01-23T18:28:37.451615221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-qrngm,Uid:fd8c84c1-3db5-46bc-b232-d92330035bbc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:37.456085 containerd[1632]: time="2026-01-23T18:28:37.455991666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbkc5,Uid:420164f1-10e4-4309-843a-9bf4c7513aff,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:37.467884 containerd[1632]: time="2026-01-23T18:28:37.467740670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qrzq4,Uid:f4756753-32cd-49e4-a9ac-3b64d97f5679,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:37.481286 containerd[1632]: time="2026-01-23T18:28:37.481118826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-vgp5c,Uid:45c79f90-5bfc-4e7b-ac61-b9e42301e7a5,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:37.537576 containerd[1632]: time="2026-01-23T18:28:37.537136197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6cd769bc-pxzdx,Uid:7bfb42fc-77fc-4491-a374-12534b8ba3b1,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:37.575353 containerd[1632]: time="2026-01-23T18:28:37.575187603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f89f6994b-gxllw,Uid:720b9cd4-1750-46fd-95a5-f9417f9523f5,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:37.584815 containerd[1632]: time="2026-01-23T18:28:37.584732103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c584556b-w2wzd,Uid:2f66ac8e-bae7-47e7-aa6e-37d83efdc54b,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:37.743135 kubelet[2842]: E0123 18:28:37.743088 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:37.754520 containerd[1632]: time="2026-01-23T18:28:37.754288533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:28:37.861959 containerd[1632]: time="2026-01-23T18:28:37.861005698Z" level=error msg="Failed to destroy network for sandbox \"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.876680 containerd[1632]: time="2026-01-23T18:28:37.876525537Z" level=error msg="Failed to destroy network for sandbox \"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.877272 containerd[1632]: time="2026-01-23T18:28:37.877104452Z" level=error msg="Failed to destroy network for sandbox \"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.879174 containerd[1632]: time="2026-01-23T18:28:37.879040741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x2dz4,Uid:06cbbdec-0484-47bb-b6b9-aae05580b8cd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.881343 kubelet[2842]: E0123 18:28:37.879719 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.881343 kubelet[2842]: E0123 18:28:37.879836 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x2dz4" Jan 23 18:28:37.881343 kubelet[2842]: E0123 18:28:37.879953 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x2dz4" Jan 23 18:28:37.881655 kubelet[2842]: E0123 18:28:37.880087 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x2dz4_kube-system(06cbbdec-0484-47bb-b6b9-aae05580b8cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x2dz4_kube-system(06cbbdec-0484-47bb-b6b9-aae05580b8cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b17c3747a8144bddc67c4e1c21ae5ed72b04d6b11f0fcf27838caf4a30e12b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x2dz4" podUID="06cbbdec-0484-47bb-b6b9-aae05580b8cd" Jan 23 18:28:37.895809 containerd[1632]: time="2026-01-23T18:28:37.895663252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qrzq4,Uid:f4756753-32cd-49e4-a9ac-3b64d97f5679,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.897869 kubelet[2842]: E0123 18:28:37.897816 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
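Every RunPodSandbox attempt in this stretch fails the same Calico precheck: the CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes into the host-mounted /var/lib/calico/ directory once it is running, and the node image is only being pulled at 18:28:37.754 above. The error text spells out the remedy; as a small diagnostic sketch of that same check (the path is quoted verbatim from the errors, everything else is illustrative):

    #!/usr/bin/env python3
    # Reproduce the precheck behind the "stat /var/lib/calico/nodename" failures:
    # Calico's CNI plugin refuses add/delete operations until calico-node has
    # written this file under the host-mounted /var/lib/calico/ directory.
    import sys

    NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the log

    def check_calico_ready(path: str = NODENAME_FILE) -> int:
        try:
            with open(path, encoding="utf-8") as fh:
                nodename = fh.read().strip()
        except FileNotFoundError:
            print(f"{path}: no such file or directory: check that the calico/node "
                  "container is running and has mounted /var/lib/calico/")
            return 1
        print(f"calico-node has registered this host as {nodename!r}; CNI calls should succeed")
        return 0

    if __name__ == "__main__":
        sys.exit(check_calico_ready())

Until calico-node starts and writes the file, each retry keeps producing the same CreatePodSandboxError for the pending coredns, goldmane, whisker and apiserver pods.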
Jan 23 18:28:37.898776 kubelet[2842]: E0123 18:28:37.898526 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.898776 kubelet[2842]: E0123 18:28:37.898635 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qrzq4" Jan 23 18:28:37.899940 kubelet[2842]: E0123 18:28:37.899611 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qrzq4_calico-system(f4756753-32cd-49e4-a9ac-3b64d97f5679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qrzq4_calico-system(f4756753-32cd-49e4-a9ac-3b64d97f5679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"875eb0f913bd8a27006968bc46c1612d59b1cd3357bcbb22aa67c09c65b44bd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:28:37.900363 containerd[1632]: time="2026-01-23T18:28:37.899980488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6cd769bc-pxzdx,Uid:7bfb42fc-77fc-4491-a374-12534b8ba3b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.901168 kubelet[2842]: E0123 18:28:37.901133 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.901335 kubelet[2842]: E0123 18:28:37.901306 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" Jan 23 18:28:37.901709 kubelet[2842]: E0123 18:28:37.901679 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" Jan 23 18:28:37.901875 kubelet[2842]: E0123 18:28:37.901841 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f6cd769bc-pxzdx_calico-apiserver(7bfb42fc-77fc-4491-a374-12534b8ba3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f6cd769bc-pxzdx_calico-apiserver(7bfb42fc-77fc-4491-a374-12534b8ba3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76bbe1a60ded99c5fd328d522eef817159cd87b52468c40db46d60fb26cc15c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:28:37.914092 containerd[1632]: time="2026-01-23T18:28:37.913886362Z" level=error msg="Failed to destroy network for sandbox \"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.918936 containerd[1632]: time="2026-01-23T18:28:37.918899830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-vgp5c,Uid:45c79f90-5bfc-4e7b-ac61-b9e42301e7a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.921128 kubelet[2842]: E0123 18:28:37.920518 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.921128 kubelet[2842]: E0123 18:28:37.920578 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" Jan 23 18:28:37.921128 kubelet[2842]: E0123 18:28:37.920597 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" Jan 23 18:28:37.921308 kubelet[2842]: E0123 18:28:37.921068 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5372a2f45f67c6b95e4a77f3d946408c68373bd09228c2121ee052295a977c88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:28:37.922533 containerd[1632]: time="2026-01-23T18:28:37.922350221Z" level=error msg="Failed to destroy network for sandbox \"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.927547 containerd[1632]: time="2026-01-23T18:28:37.927309683Z" level=error msg="Failed to destroy network for sandbox \"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.931150 containerd[1632]: time="2026-01-23T18:28:37.930975197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-qrngm,Uid:fd8c84c1-3db5-46bc-b232-d92330035bbc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.931739 kubelet[2842]: E0123 18:28:37.931210 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.931739 kubelet[2842]: E0123 18:28:37.931261 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" Jan 23 18:28:37.931739 kubelet[2842]: E0123 18:28:37.931319 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" Jan 23 18:28:37.932844 kubelet[2842]: E0123 18:28:37.931366 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6469486c9-qrngm_calico-apiserver(fd8c84c1-3db5-46bc-b232-d92330035bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6469486c9-qrngm_calico-apiserver(fd8c84c1-3db5-46bc-b232-d92330035bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43a8f6b8405cae59ddda6fcbc1d2efe7a784a654562c3818db3d7cd6f9d758bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:28:37.934572 containerd[1632]: time="2026-01-23T18:28:37.934525210Z" level=error msg="Failed to destroy network for sandbox \"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.935626 containerd[1632]: time="2026-01-23T18:28:37.935165496Z" level=error msg="Failed to destroy network for sandbox \"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.937062 containerd[1632]: time="2026-01-23T18:28:37.936864908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f89f6994b-gxllw,Uid:720b9cd4-1750-46fd-95a5-f9417f9523f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.937709 kubelet[2842]: E0123 18:28:37.937306 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.937709 kubelet[2842]: E0123 18:28:37.937480 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" Jan 23 18:28:37.937709 kubelet[2842]: E0123 18:28:37.937504 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" Jan 23 18:28:37.938351 kubelet[2842]: E0123 18:28:37.937549 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45b7af664e7c0b33d98c2377ed69f19f886919677a405ce051b0be4e7064b690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:28:37.941588 containerd[1632]: time="2026-01-23T18:28:37.941360134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbkc5,Uid:420164f1-10e4-4309-843a-9bf4c7513aff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.943127 containerd[1632]: time="2026-01-23T18:28:37.942089809Z" level=error msg="Failed to destroy network for sandbox \"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.943195 kubelet[2842]: E0123 18:28:37.942627 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.943195 kubelet[2842]: E0123 18:28:37.942678 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fbkc5" Jan 23 18:28:37.943195 kubelet[2842]: E0123 18:28:37.942706 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fbkc5" Jan 23 
18:28:37.943337 kubelet[2842]: E0123 18:28:37.942757 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ceb67b6bdd11a73f3898d6c746c527f195be830301af2fccdb44b970f761e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:37.944368 containerd[1632]: time="2026-01-23T18:28:37.944311623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4j94s,Uid:308cfc11-36f8-46bc-bb62-85fc8219ee01,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.944609 kubelet[2842]: E0123 18:28:37.944577 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.944649 kubelet[2842]: E0123 18:28:37.944624 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4j94s" Jan 23 18:28:37.944733 kubelet[2842]: E0123 18:28:37.944653 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4j94s" Jan 23 18:28:37.944761 kubelet[2842]: E0123 18:28:37.944721 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4j94s_kube-system(308cfc11-36f8-46bc-bb62-85fc8219ee01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4j94s_kube-system(308cfc11-36f8-46bc-bb62-85fc8219ee01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eee7f9b768c4ac2734f5e5a28c4fc3646b388d120f16fc9a24438a06a8db2d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4j94s" 
podUID="308cfc11-36f8-46bc-bb62-85fc8219ee01" Jan 23 18:28:37.947543 containerd[1632]: time="2026-01-23T18:28:37.947477701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c584556b-w2wzd,Uid:2f66ac8e-bae7-47e7-aa6e-37d83efdc54b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.947723 kubelet[2842]: E0123 18:28:37.947623 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:28:37.947723 kubelet[2842]: E0123 18:28:37.947656 2842 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c584556b-w2wzd" Jan 23 18:28:37.947883 kubelet[2842]: E0123 18:28:37.947738 2842 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c584556b-w2wzd" Jan 23 18:28:37.947883 kubelet[2842]: E0123 18:28:37.947769 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85c584556b-w2wzd_calico-system(2f66ac8e-bae7-47e7-aa6e-37d83efdc54b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85c584556b-w2wzd_calico-system(2f66ac8e-bae7-47e7-aa6e-37d83efdc54b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78606e531c889285ef4095bde318a13c4d1c91e86d005d85a9740151153055a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c584556b-w2wzd" podUID="2f66ac8e-bae7-47e7-aa6e-37d83efdc54b" Jan 23 18:28:46.613295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776735883.mount: Deactivated successfully. 
Jan 23 18:28:46.678119 containerd[1632]: time="2026-01-23T18:28:46.677960968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:46.702244 containerd[1632]: time="2026-01-23T18:28:46.679462976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 23 18:28:46.702244 containerd[1632]: time="2026-01-23T18:28:46.681617472Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:46.702643 containerd[1632]: time="2026-01-23T18:28:46.685329360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.930686041s" Jan 23 18:28:46.702643 containerd[1632]: time="2026-01-23T18:28:46.702492507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:28:46.703171 containerd[1632]: time="2026-01-23T18:28:46.702923948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:28:46.739583 containerd[1632]: time="2026-01-23T18:28:46.739532145Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:28:46.762477 containerd[1632]: time="2026-01-23T18:28:46.761503902Z" level=info msg="Container f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:46.767664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645315807.mount: Deactivated successfully. Jan 23 18:28:46.781038 containerd[1632]: time="2026-01-23T18:28:46.780921902Z" level=info msg="CreateContainer within sandbox \"fdef5c9101188e7c002780fdd08331d5f17f3a2b42e85e1207be8787941c7f91\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438\"" Jan 23 18:28:46.782204 containerd[1632]: time="2026-01-23T18:28:46.782088431Z" level=info msg="StartContainer for \"f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438\"" Jan 23 18:28:46.784116 containerd[1632]: time="2026-01-23T18:28:46.783981707Z" level=info msg="connecting to shim f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438" address="unix:///run/containerd/s/eeb19d307cd62485163c76b3b8352ea3385e83ba942321cf68e467d6564cc38e" protocol=ttrpc version=3 Jan 23 18:28:46.906740 systemd[1]: Started cri-containerd-f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438.scope - libcontainer container f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438. 
Jan 23 18:28:47.072552 kernel: audit: type=1334 audit(1769192927.066:584): prog-id=177 op=LOAD Jan 23 18:28:47.072880 kernel: audit: type=1300 audit(1769192927.066:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: BPF prog-id=177 op=LOAD Jan 23 18:28:47.066000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.105525 kernel: audit: type=1327 audit(1769192927.066:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.105907 kernel: audit: type=1334 audit(1769192927.066:585): prog-id=178 op=LOAD Jan 23 18:28:47.066000 audit: BPF prog-id=178 op=LOAD Jan 23 18:28:47.066000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.124845 kernel: audit: type=1300 audit(1769192927.066:585): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.152559 kernel: audit: type=1327 audit(1769192927.066:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.167490 kernel: audit: type=1334 audit(1769192927.066:586): prog-id=178 op=UNLOAD Jan 23 18:28:47.168056 kernel: audit: type=1300 audit(1769192927.066:586): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: BPF prog-id=178 op=UNLOAD Jan 23 18:28:47.173243 kernel: audit: type=1327 audit(1769192927.066:586): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.066000 audit[3974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.202030 kernel: audit: type=1334 audit(1769192927.066:587): prog-id=177 op=UNLOAD Jan 23 18:28:47.066000 audit: BPF prog-id=177 op=UNLOAD Jan 23 18:28:47.066000 audit[3974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.066000 audit: BPF prog-id=179 op=LOAD Jan 23 18:28:47.066000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3407 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634373831353434363539356337633635613435386164333230383638 Jan 23 18:28:47.260586 containerd[1632]: time="2026-01-23T18:28:47.260196017Z" level=info msg="StartContainer for \"f47815446595c7c65a458ad3208683095aad35331c46ba6a438c37e370d31438\" returns successfully" Jan 23 18:28:47.631748 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:28:47.632041 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 18:28:47.974348 kubelet[2842]: E0123 18:28:47.974308 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:48.005921 kubelet[2842]: I0123 18:28:48.005771 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lwh25" podStartSLOduration=2.195217684 podStartE2EDuration="21.005746274s" podCreationTimestamp="2026-01-23 18:28:27 +0000 UTC" firstStartedPulling="2026-01-23 18:28:27.895354014 +0000 UTC m=+33.856636997" lastFinishedPulling="2026-01-23 18:28:46.705882604 +0000 UTC m=+52.667165587" observedRunningTime="2026-01-23 18:28:48.00269861 +0000 UTC m=+53.963981623" watchObservedRunningTime="2026-01-23 18:28:48.005746274 +0000 UTC m=+53.967029258" Jan 23 18:28:48.131483 kubelet[2842]: I0123 18:28:48.131246 2842 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6nh\" (UniqueName: \"kubernetes.io/projected/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-kube-api-access-gm6nh\") pod \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " Jan 23 18:28:48.131483 kubelet[2842]: I0123 18:28:48.131364 2842 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-backend-key-pair\") pod \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " Jan 23 18:28:48.131728 kubelet[2842]: I0123 18:28:48.131544 2842 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-ca-bundle\") pod \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\" (UID: \"2f66ac8e-bae7-47e7-aa6e-37d83efdc54b\") " Jan 23 18:28:48.134549 kubelet[2842]: I0123 18:28:48.134329 2842 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b" (UID: "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:28:48.146916 systemd[1]: var-lib-kubelet-pods-2f66ac8e\x2dbae7\x2d47e7\x2daa6e\x2d37d83efdc54b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:28:48.150758 kubelet[2842]: I0123 18:28:48.150060 2842 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b" (UID: "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:28:48.163559 kubelet[2842]: I0123 18:28:48.163250 2842 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-kube-api-access-gm6nh" (OuterVolumeSpecName: "kube-api-access-gm6nh") pod "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b" (UID: "2f66ac8e-bae7-47e7-aa6e-37d83efdc54b"). InnerVolumeSpecName "kube-api-access-gm6nh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:28:48.163989 systemd[1]: var-lib-kubelet-pods-2f66ac8e\x2dbae7\x2d47e7\x2daa6e\x2d37d83efdc54b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgm6nh.mount: Deactivated successfully. Jan 23 18:28:48.233589 kubelet[2842]: I0123 18:28:48.232785 2842 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gm6nh\" (UniqueName: \"kubernetes.io/projected/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-kube-api-access-gm6nh\") on node \"localhost\" DevicePath \"\"" Jan 23 18:28:48.233589 kubelet[2842]: I0123 18:28:48.232853 2842 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 18:28:48.233589 kubelet[2842]: I0123 18:28:48.232866 2842 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 18:28:48.425347 containerd[1632]: time="2026-01-23T18:28:48.424768725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f89f6994b-gxllw,Uid:720b9cd4-1750-46fd-95a5-f9417f9523f5,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:48.493783 systemd[1]: Removed slice kubepods-besteffort-pod2f66ac8e_bae7_47e7_aa6e_37d83efdc54b.slice - libcontainer container kubepods-besteffort-pod2f66ac8e_bae7_47e7_aa6e_37d83efdc54b.slice. Jan 23 18:28:48.874570 systemd-networkd[1517]: calib4e7fe74411: Link UP Jan 23 18:28:48.875663 systemd-networkd[1517]: calib4e7fe74411: Gained carrier Jan 23 18:28:48.899023 containerd[1632]: 2026-01-23 18:28:48.564 [INFO][4054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:28:48.899023 containerd[1632]: 2026-01-23 18:28:48.667 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0 calico-kube-controllers-f89f6994b- calico-system 720b9cd4-1750-46fd-95a5-f9417f9523f5 891 0 2026-01-23 18:28:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f89f6994b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f89f6994b-gxllw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4e7fe74411 [] [] }} ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-" Jan 23 18:28:48.899023 containerd[1632]: 2026-01-23 18:28:48.668 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.899023 containerd[1632]: 2026-01-23 18:28:48.792 [INFO][4081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" 
HandleID="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Workload="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.794 [INFO][4081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" HandleID="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Workload="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000418220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f89f6994b-gxllw", "timestamp":"2026-01-23 18:28:48.792815233 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.794 [INFO][4081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.795 [INFO][4081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.795 [INFO][4081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.808 [INFO][4081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" host="localhost" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.819 [INFO][4081] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.827 [INFO][4081] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.829 [INFO][4081] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.833 [INFO][4081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:48.899324 containerd[1632]: 2026-01-23 18:28:48.833 [INFO][4081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" host="localhost" Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.836 [INFO][4081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587 Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.841 [INFO][4081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" host="localhost" Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.853 [INFO][4081] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" host="localhost" Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.854 [INFO][4081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" host="localhost" Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.854 [INFO][4081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:28:48.899639 containerd[1632]: 2026-01-23 18:28:48.854 [INFO][4081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" HandleID="k8s-pod-network.f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Workload="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.899842 containerd[1632]: 2026-01-23 18:28:48.860 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0", GenerateName:"calico-kube-controllers-f89f6994b-", Namespace:"calico-system", SelfLink:"", UID:"720b9cd4-1750-46fd-95a5-f9417f9523f5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f89f6994b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f89f6994b-gxllw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4e7fe74411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:48.899936 containerd[1632]: 2026-01-23 18:28:48.861 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.899936 containerd[1632]: 2026-01-23 18:28:48.861 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e7fe74411 ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.899936 containerd[1632]: 2026-01-23 18:28:48.876 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" 
Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.900002 containerd[1632]: 2026-01-23 18:28:48.876 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0", GenerateName:"calico-kube-controllers-f89f6994b-", Namespace:"calico-system", SelfLink:"", UID:"720b9cd4-1750-46fd-95a5-f9417f9523f5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f89f6994b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587", Pod:"calico-kube-controllers-f89f6994b-gxllw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4e7fe74411", MAC:"de:4b:18:c5:71:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:48.900123 containerd[1632]: 2026-01-23 18:28:48.893 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" Namespace="calico-system" Pod="calico-kube-controllers-f89f6994b-gxllw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f89f6994b--gxllw-eth0" Jan 23 18:28:48.975424 kubelet[2842]: E0123 18:28:48.975289 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:49.033923 containerd[1632]: time="2026-01-23T18:28:49.033729731Z" level=info msg="connecting to shim f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587" address="unix:///run/containerd/s/1c16c9e8e5ad5416d94da29dde9028a9587854c6187a7d954d9385e2c1d92e5e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:49.121640 systemd[1]: Started cri-containerd-f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587.scope - libcontainer container f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587. Jan 23 18:28:49.152841 systemd[1]: Created slice kubepods-besteffort-podb1a5247f_3dd0_4a60_b451_df40ad40b033.slice - libcontainer container kubepods-besteffort-podb1a5247f_3dd0_4a60_b451_df40ad40b033.slice. 
Jan 23 18:28:49.182697 kubelet[2842]: I0123 18:28:49.182109 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a5247f-3dd0-4a60-b451-df40ad40b033-whisker-ca-bundle\") pod \"whisker-59cc6d476d-zc49f\" (UID: \"b1a5247f-3dd0-4a60-b451-df40ad40b033\") " pod="calico-system/whisker-59cc6d476d-zc49f" Jan 23 18:28:49.182697 kubelet[2842]: I0123 18:28:49.182176 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b1a5247f-3dd0-4a60-b451-df40ad40b033-whisker-backend-key-pair\") pod \"whisker-59cc6d476d-zc49f\" (UID: \"b1a5247f-3dd0-4a60-b451-df40ad40b033\") " pod="calico-system/whisker-59cc6d476d-zc49f" Jan 23 18:28:49.182697 kubelet[2842]: I0123 18:28:49.182317 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8fh7\" (UniqueName: \"kubernetes.io/projected/b1a5247f-3dd0-4a60-b451-df40ad40b033-kube-api-access-z8fh7\") pod \"whisker-59cc6d476d-zc49f\" (UID: \"b1a5247f-3dd0-4a60-b451-df40ad40b033\") " pod="calico-system/whisker-59cc6d476d-zc49f" Jan 23 18:28:49.207000 audit: BPF prog-id=180 op=LOAD Jan 23 18:28:49.208000 audit: BPF prog-id=181 op=LOAD Jan 23 18:28:49.208000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.208000 audit: BPF prog-id=181 op=UNLOAD Jan 23 18:28:49.208000 audit[4136]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.209000 audit: BPF prog-id=182 op=LOAD Jan 23 18:28:49.209000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.209000 audit: BPF prog-id=183 op=LOAD Jan 23 18:28:49.209000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.209000 audit: BPF prog-id=183 op=UNLOAD Jan 23 18:28:49.209000 audit[4136]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.209000 audit: BPF prog-id=182 op=UNLOAD Jan 23 18:28:49.209000 audit[4136]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.209000 audit: BPF prog-id=184 op=LOAD Jan 23 18:28:49.209000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4110 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:49.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635653035633761366334653663643836333930656232386632306230 Jan 23 18:28:49.212763 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:49.288790 containerd[1632]: time="2026-01-23T18:28:49.288693295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f89f6994b-gxllw,Uid:720b9cd4-1750-46fd-95a5-f9417f9523f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5e05c7a6c4e6cd86390eb28f20b0be735ec62f5e9da1ae74eeb8da683b49587\"" Jan 23 18:28:49.319803 containerd[1632]: time="2026-01-23T18:28:49.319569130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:28:49.381748 containerd[1632]: time="2026-01-23T18:28:49.381637341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:49.383679 containerd[1632]: time="2026-01-23T18:28:49.383491393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:28:49.383679 containerd[1632]: time="2026-01-23T18:28:49.383539771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:49.384039 kubelet[2842]: E0123 18:28:49.383929 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:28:49.384171 kubelet[2842]: E0123 18:28:49.384044 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:28:49.384704 kubelet[2842]: E0123 18:28:49.384467 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbrtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:49.386135 kubelet[2842]: E0123 18:28:49.385999 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:28:49.476652 containerd[1632]: time="2026-01-23T18:28:49.476167828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cc6d476d-zc49f,Uid:b1a5247f-3dd0-4a60-b451-df40ad40b033,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:49.705562 systemd-networkd[1517]: cali1a104c14142: Link UP Jan 23 18:28:49.707023 systemd-networkd[1517]: cali1a104c14142: Gained carrier Jan 23 18:28:49.731226 containerd[1632]: 2026-01-23 18:28:49.531 [INFO][4171] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:28:49.731226 containerd[1632]: 2026-01-23 18:28:49.571 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59cc6d476d--zc49f-eth0 whisker-59cc6d476d- calico-system b1a5247f-3dd0-4a60-b451-df40ad40b033 971 0 2026-01-23 18:28:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cc6d476d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59cc6d476d-zc49f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1a104c14142 [] [] }} ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-" Jan 23 18:28:49.731226 containerd[1632]: 2026-01-23 18:28:49.571 [INFO][4171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.731226 containerd[1632]: 2026-01-23 18:28:49.614 [INFO][4185] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" HandleID="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Workload="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.614 [INFO][4185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" HandleID="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Workload="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034e210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59cc6d476d-zc49f", "timestamp":"2026-01-23 18:28:49.614023543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.614 [INFO][4185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.614 [INFO][4185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.614 [INFO][4185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.630 [INFO][4185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" host="localhost" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.642 [INFO][4185] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.665 [INFO][4185] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.668 [INFO][4185] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.672 [INFO][4185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:49.733115 containerd[1632]: 2026-01-23 18:28:49.672 [INFO][4185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" host="localhost" Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.676 [INFO][4185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217 Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.684 [INFO][4185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" host="localhost" Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.693 [INFO][4185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" host="localhost" Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.693 [INFO][4185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" host="localhost" Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.693 [INFO][4185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:28:49.733699 containerd[1632]: 2026-01-23 18:28:49.693 [INFO][4185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" HandleID="k8s-pod-network.9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Workload="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.733888 containerd[1632]: 2026-01-23 18:28:49.700 [INFO][4171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cc6d476d--zc49f-eth0", GenerateName:"whisker-59cc6d476d-", Namespace:"calico-system", SelfLink:"", UID:"b1a5247f-3dd0-4a60-b451-df40ad40b033", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cc6d476d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59cc6d476d-zc49f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a104c14142", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:49.733888 containerd[1632]: 2026-01-23 18:28:49.701 [INFO][4171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.734022 containerd[1632]: 2026-01-23 18:28:49.701 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a104c14142 ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.734022 containerd[1632]: 2026-01-23 18:28:49.707 [INFO][4171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.734122 containerd[1632]: 2026-01-23 18:28:49.708 [INFO][4171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cc6d476d--zc49f-eth0", GenerateName:"whisker-59cc6d476d-", Namespace:"calico-system", SelfLink:"", UID:"b1a5247f-3dd0-4a60-b451-df40ad40b033", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cc6d476d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217", Pod:"whisker-59cc6d476d-zc49f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a104c14142", MAC:"ee:97:bc:b6:9d:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:49.734235 containerd[1632]: 2026-01-23 18:28:49.722 [INFO][4171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" Namespace="calico-system" Pod="whisker-59cc6d476d-zc49f" WorkloadEndpoint="localhost-k8s-whisker--59cc6d476d--zc49f-eth0" Jan 23 18:28:49.938310 containerd[1632]: time="2026-01-23T18:28:49.938179296Z" level=info msg="connecting to shim 9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217" address="unix:///run/containerd/s/0f4474e5bdce90df621c77f887de41b31f576f6d414eb8a1f698f387a4b6645a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:49.993136 kubelet[2842]: E0123 18:28:49.992855 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:28:50.065287 systemd[1]: Started cri-containerd-9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217.scope - libcontainer container 9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217. 
Jan 23 18:28:50.111000 audit: BPF prog-id=185 op=LOAD Jan 23 18:28:50.112000 audit: BPF prog-id=186 op=LOAD Jan 23 18:28:50.112000 audit[4315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.112000 audit: BPF prog-id=186 op=UNLOAD Jan 23 18:28:50.112000 audit[4315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.112000 audit: BPF prog-id=187 op=LOAD Jan 23 18:28:50.112000 audit[4315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.112000 audit: BPF prog-id=188 op=LOAD Jan 23 18:28:50.112000 audit[4315]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.113000 audit: BPF prog-id=188 op=UNLOAD Jan 23 18:28:50.113000 audit[4315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.113000 audit: BPF prog-id=187 op=UNLOAD Jan 23 18:28:50.113000 audit[4315]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.113000 audit: BPF prog-id=189 op=LOAD Jan 23 18:28:50.113000 audit[4315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=4302 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633633643233376638333937646230313665396437653462363238 Jan 23 18:28:50.117132 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:50.229185 systemd-networkd[1517]: calib4e7fe74411: Gained IPv6LL Jan 23 18:28:50.266000 audit: BPF prog-id=190 op=LOAD Jan 23 18:28:50.266000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd0e49ed0 a2=98 a3=1fffffffffffffff items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.266000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:28:50.276000 audit: BPF prog-id=190 op=UNLOAD Jan 23 18:28:50.276000 audit[4353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdd0e49ea0 a3=0 items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:28:50.276000 audit: BPF prog-id=191 op=LOAD Jan 23 18:28:50.276000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd0e49db0 a2=94 a3=3 items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 
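Editor's note: the audit PROCTITLE field in the records above is the audited process's command line, hex-encoded with NUL bytes separating the arguments (as in /proc/<pid>/cmdline). Decoding the bpftool record above recovers the map-create invocation; the helper below is a minimal sketch, with the hex copied from the log entry.

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_string: str) -> list[str]:
        return bytes.fromhex(hex_string).decode("utf-8", errors="replace").split("\x00")

    proctitle = ("627066746F6F6C006D61700063726561746500"
                 "2F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300"
                 "747970650070726F675F617272617900"
                 "6B657900340076616C7565003400656E74726965730033"
                 "006E616D650063616C695F63746C625F70726F677300666C6167730030")
    print(decode_proctitle(proctitle))
    # ['bpftool', 'map', 'create', '/sys/fs/bpf/tc/globals/cali_ctlb_progs',
    #  'type', 'prog_array', 'key', '4', 'value', '4', 'entries', '3',
    #  'name', 'cali_ctlb_progs', 'flags', '0']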
Jan 23 18:28:50.276000 audit: BPF prog-id=191 op=UNLOAD Jan 23 18:28:50.276000 audit[4353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd0e49db0 a2=94 a3=3 items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:28:50.276000 audit: BPF prog-id=192 op=LOAD Jan 23 18:28:50.276000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd0e49df0 a2=94 a3=7ffdd0e49fd0 items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:28:50.276000 audit: BPF prog-id=192 op=UNLOAD Jan 23 18:28:50.276000 audit[4353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd0e49df0 a2=94 a3=7ffdd0e49fd0 items=0 ppid=4204 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:28:50.292000 audit: BPF prog-id=193 op=LOAD Jan 23 18:28:50.292000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffffa9367c0 a2=98 a3=3 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.292000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.292000 audit: BPF prog-id=193 op=UNLOAD Jan 23 18:28:50.292000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffffa936790 a3=0 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.292000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.293000 audit: BPF prog-id=194 op=LOAD Jan 23 18:28:50.293000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffa9365b0 a2=94 a3=54428f items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.293000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.294000 
audit: BPF prog-id=194 op=UNLOAD Jan 23 18:28:50.294000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffffa9365b0 a2=94 a3=54428f items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.294000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.294000 audit: BPF prog-id=195 op=LOAD Jan 23 18:28:50.294000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffa9365e0 a2=94 a3=2 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.294000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.294000 audit: BPF prog-id=195 op=UNLOAD Jan 23 18:28:50.294000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffffa9365e0 a2=0 a3=2 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.294000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.322009 containerd[1632]: time="2026-01-23T18:28:50.321831714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cc6d476d-zc49f,Uid:b1a5247f-3dd0-4a60-b451-df40ad40b033,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fc63d237f8397db016e9d7e4b628879dcb40f63e8b78d1ca972230b9d4b7217\"" Jan 23 18:28:50.329447 containerd[1632]: time="2026-01-23T18:28:50.329362369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:28:50.413701 containerd[1632]: time="2026-01-23T18:28:50.412202575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:50.424464 containerd[1632]: time="2026-01-23T18:28:50.423849993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:28:50.424552 containerd[1632]: time="2026-01-23T18:28:50.423938465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:50.425172 kubelet[2842]: E0123 18:28:50.424944 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:28:50.425343 kubelet[2842]: E0123 18:28:50.425192 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:28:50.426746 kubelet[2842]: E0123 18:28:50.425983 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2adf5ce0a43f474faed108d4fa915a26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:50.426746 kubelet[2842]: E0123 18:28:50.425123 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:50.445274 kubelet[2842]: E0123 18:28:50.436986 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:50.445349 containerd[1632]: time="2026-01-23T18:28:50.434224255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4j94s,Uid:308cfc11-36f8-46bc-bb62-85fc8219ee01,Namespace:kube-system,Attempt:0,}" Jan 23 18:28:50.456811 containerd[1632]: time="2026-01-23T18:28:50.454573055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:28:50.459161 containerd[1632]: time="2026-01-23T18:28:50.457354809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x2dz4,Uid:06cbbdec-0484-47bb-b6b9-aae05580b8cd,Namespace:kube-system,Attempt:0,}" Jan 23 18:28:50.459888 containerd[1632]: time="2026-01-23T18:28:50.459813845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-vgp5c,Uid:45c79f90-5bfc-4e7b-ac61-b9e42301e7a5,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:50.467934 kubelet[2842]: I0123 18:28:50.467883 2842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f66ac8e-bae7-47e7-aa6e-37d83efdc54b" path="/var/lib/kubelet/pods/2f66ac8e-bae7-47e7-aa6e-37d83efdc54b/volumes" Jan 23 18:28:50.529996 containerd[1632]: time="2026-01-23T18:28:50.529801538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
18:28:50.531226 containerd[1632]: time="2026-01-23T18:28:50.531192579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:28:50.531360 containerd[1632]: time="2026-01-23T18:28:50.531232395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:50.532125 kubelet[2842]: E0123 18:28:50.532016 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:28:50.533528 kubelet[2842]: E0123 18:28:50.532496 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:28:50.535467 kubelet[2842]: E0123 18:28:50.534025 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" logger="UnhandledError" Jan 23 18:28:50.536984 kubelet[2842]: E0123 18:28:50.536937 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:28:50.808353 systemd-networkd[1517]: cali1a104c14142: Gained IPv6LL Jan 23 18:28:50.935000 audit: BPF prog-id=196 op=LOAD Jan 23 18:28:50.935000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffa9364a0 a2=94 a3=1 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.935000 audit: BPF prog-id=196 op=UNLOAD Jan 23 18:28:50.935000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffffa9364a0 a2=94 a3=1 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.948612 systemd-networkd[1517]: calibc58edbb86d: Link UP Jan 23 18:28:50.949567 systemd-networkd[1517]: calibc58edbb86d: Gained carrier Jan 23 18:28:50.970000 audit: BPF prog-id=197 op=LOAD Jan 23 18:28:50.970000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffffa936490 a2=94 a3=4 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.970000 audit: BPF prog-id=197 op=UNLOAD Jan 23 18:28:50.970000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffffa936490 a2=0 a3=4 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.971000 audit: BPF prog-id=198 op=LOAD Jan 23 18:28:50.971000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffffa9362f0 a2=94 a3=5 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.971000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.972000 audit: BPF prog-id=198 op=UNLOAD Jan 23 18:28:50.972000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=6 a1=7ffffa9362f0 a2=0 a3=5 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.972000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.972000 audit: BPF prog-id=199 op=LOAD Jan 23 18:28:50.972000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffffa936510 a2=94 a3=6 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.972000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.972000 audit: BPF prog-id=199 op=UNLOAD Jan 23 18:28:50.972000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffffa936510 a2=0 a3=6 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.972000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.973000 audit: BPF prog-id=200 op=LOAD Jan 23 18:28:50.973000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffffa935cc0 a2=94 a3=88 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.973000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.974000 audit: BPF prog-id=201 op=LOAD Jan 23 18:28:50.974000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffffa935b40 a2=94 a3=2 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.974000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.974000 audit: BPF prog-id=201 op=UNLOAD Jan 23 18:28:50.974000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffffa935b70 a2=0 a3=7ffffa935c70 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.974000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:50.975000 audit: BPF prog-id=200 op=UNLOAD Jan 23 18:28:50.975000 audit[4354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3bc2cd10 a2=0 a3=af2c4754e5f14118 items=0 ppid=4204 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:50.975000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:28:51.017553 kubelet[2842]: E0123 18:28:51.017339 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:28:51.020546 containerd[1632]: 2026-01-23 18:28:50.678 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0 calico-apiserver-6469486c9- calico-apiserver 45c79f90-5bfc-4e7b-ac61-b9e42301e7a5 888 0 2026-01-23 18:28:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6469486c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6469486c9-vgp5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc58edbb86d [] [] }} ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-" Jan 23 18:28:51.020546 containerd[1632]: 2026-01-23 18:28:50.680 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.020546 containerd[1632]: 2026-01-23 18:28:50.782 [INFO][4417] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" HandleID="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Workload="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.782 [INFO][4417] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" HandleID="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Workload="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051dbd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6469486c9-vgp5c", "timestamp":"2026-01-23 18:28:50.782228105 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.782 [INFO][4417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.783 [INFO][4417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.783 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.801 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" host="localhost" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.847 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.899 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.904 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.909 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.035963 containerd[1632]: 2026-01-23 18:28:50.909 [INFO][4417] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" host="localhost" Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.912 [INFO][4417] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9 Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.919 [INFO][4417] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" host="localhost" Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.930 [INFO][4417] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" host="localhost" Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.930 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" host="localhost" Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.931 [INFO][4417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
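Editor's note: in the ipam entries above, the Calico CNI plugin takes the host-wide IPAM lock, confirms the node's affinity for the 192.168.88.128/26 block, and claims 192.168.88.131 for calico-apiserver-6469486c9-vgp5c, just as it claimed 192.168.88.130 for the whisker pod earlier. The sketch below is a simplified stand-in for that "next free address in the affine block" step, not Calico's implementation; the pre-allocated addresses are assumptions for the example.

    import ipaddress

    # Simplified stand-in for the block allocation logged above: hand out the
    # next unused host address from the node's affine /26 block.
    BLOCK = ipaddress.ip_network("192.168.88.128/26")
    allocated = {ipaddress.ip_address("192.168.88.129"),  # assumed earlier claims
                 ipaddress.ip_address("192.168.88.130")}

    def assign_next(block, taken):
        for addr in block.hosts():  # .hosts() skips the network/broadcast addresses
            if addr not in taken:
                taken.add(addr)
                return addr
        raise RuntimeError("block exhausted; a new block affinity would be needed")

    print(assign_next(BLOCK, allocated))  # -> 192.168.88.131, matching the log above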
Jan 23 18:28:51.044196 containerd[1632]: 2026-01-23 18:28:50.931 [INFO][4417] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" HandleID="k8s-pod-network.200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Workload="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.044351 kubelet[2842]: E0123 18:28:51.042223 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:28:51.053187 containerd[1632]: 2026-01-23 18:28:50.940 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0", GenerateName:"calico-apiserver-6469486c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"45c79f90-5bfc-4e7b-ac61-b9e42301e7a5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6469486c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6469486c9-vgp5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc58edbb86d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.054198 containerd[1632]: 2026-01-23 18:28:50.940 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 
18:28:51.054198 containerd[1632]: 2026-01-23 18:28:50.940 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc58edbb86d ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.054198 containerd[1632]: 2026-01-23 18:28:50.953 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.055080 containerd[1632]: 2026-01-23 18:28:50.974 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0", GenerateName:"calico-apiserver-6469486c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"45c79f90-5bfc-4e7b-ac61-b9e42301e7a5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6469486c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9", Pod:"calico-apiserver-6469486c9-vgp5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc58edbb86d", MAC:"8e:a9:ca:ef:7b:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.055687 containerd[1632]: 2026-01-23 18:28:50.999 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-vgp5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--vgp5c-eth0" Jan 23 18:28:51.064000 audit: BPF prog-id=202 op=LOAD Jan 23 18:28:51.064000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff40108120 a2=98 a3=1999999999999999 items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.064000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.064000 audit: BPF prog-id=202 op=UNLOAD Jan 23 18:28:51.064000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff401080f0 a3=0 items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.064000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.064000 audit: BPF prog-id=203 op=LOAD Jan 23 18:28:51.064000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff40108000 a2=94 a3=ffff items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.064000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.064000 audit: BPF prog-id=203 op=UNLOAD Jan 23 18:28:51.064000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff40108000 a2=94 a3=ffff items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.064000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.065000 audit: BPF prog-id=204 op=LOAD Jan 23 18:28:51.065000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff40108040 a2=94 a3=7fff40108220 items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.065000 audit: BPF prog-id=204 op=UNLOAD Jan 23 18:28:51.065000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff40108040 a2=94 a3=7fff40108220 items=0 ppid=4204 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.065000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:28:51.096000 audit[4461]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:51.096000 audit[4461]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa773b520 a2=0 a3=7fffa773b50c items=0 ppid=3005 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:51.101000 audit[4461]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:51.101000 audit[4461]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa773b520 a2=0 a3=0 items=0 ppid=3005 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:51.180660 systemd-networkd[1517]: calif21c066b152: Link UP Jan 23 18:28:51.181760 systemd-networkd[1517]: calif21c066b152: Gained carrier Jan 23 18:28:51.204579 containerd[1632]: time="2026-01-23T18:28:51.204364689Z" level=info msg="connecting to shim 200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9" address="unix:///run/containerd/s/4a8abeebba5498072744fd29cabdade89863713faef97b677c5835a4d6368eef" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:51.232740 containerd[1632]: 2026-01-23 18:28:50.675 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4j94s-eth0 coredns-674b8bbfcf- kube-system 308cfc11-36f8-46bc-bb62-85fc8219ee01 884 0 2026-01-23 18:27:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4j94s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif21c066b152 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-" Jan 23 18:28:51.232740 containerd[1632]: 2026-01-23 18:28:50.678 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.232740 containerd[1632]: 2026-01-23 18:28:50.799 [INFO][4419] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" HandleID="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Workload="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:50.802 [INFO][4419] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" HandleID="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Workload="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4j94s", "timestamp":"2026-01-23 18:28:50.798991385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:50.802 [INFO][4419] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:50.931 [INFO][4419] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:50.932 [INFO][4419] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:50.973 [INFO][4419] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" host="localhost" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:51.007 [INFO][4419] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:51.046 [INFO][4419] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:51.076 [INFO][4419] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:51.093 [INFO][4419] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.275977 containerd[1632]: 2026-01-23 18:28:51.094 [INFO][4419] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" host="localhost" Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.099 [INFO][4419] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.114 [INFO][4419] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" host="localhost" Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.133 [INFO][4419] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" host="localhost" Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.133 [INFO][4419] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" host="localhost" Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.141 [INFO][4419] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:28:51.277528 containerd[1632]: 2026-01-23 18:28:51.141 [INFO][4419] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" HandleID="k8s-pod-network.2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Workload="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.278094 containerd[1632]: 2026-01-23 18:28:51.171 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4j94s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"308cfc11-36f8-46bc-bb62-85fc8219ee01", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4j94s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif21c066b152", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.278515 containerd[1632]: 2026-01-23 18:28:51.171 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.278515 containerd[1632]: 2026-01-23 18:28:51.171 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif21c066b152 ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.278515 containerd[1632]: 2026-01-23 18:28:51.176 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.278882 containerd[1632]: 2026-01-23 18:28:51.176 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4j94s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"308cfc11-36f8-46bc-bb62-85fc8219ee01", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a", Pod:"coredns-674b8bbfcf-4j94s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif21c066b152", MAC:"e6:c4:7e:36:19:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.278882 containerd[1632]: 2026-01-23 18:28:51.199 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j94s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4j94s-eth0" Jan 23 18:28:51.339682 containerd[1632]: time="2026-01-23T18:28:51.338312390Z" level=info msg="connecting to shim 2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a" address="unix:///run/containerd/s/76a9f6fd0817544cfe0336bfb2dc35557b3abd810133ffc625bf6a79fa050607" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:51.356571 systemd[1]: Started cri-containerd-200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9.scope - libcontainer container 200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9. 
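Annotation (not part of the journal): the audit PROCTITLE records above carry the invoked command line as a hex string with NUL bytes separating the arguments, so the long hex blobs are simply the bpftool, iptables-restore and runc invocations issued while Calico and containerd wire up the pod. A minimal decoding sketch in Python, using the leading portion of the bpftool proctitle shown above as input (the full strings decode the same way):

    # Decode an audit PROCTITLE hex string into its NUL-separated argv.
    def decode_proctitle(hex_str: str) -> list[str]:
        raw = bytes.fromhex(hex_str)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # Leading portion of the proctitle from the audit records above.
    print(decode_proctitle("627066746F6F6C006D617000637265617465"))
    # -> ['bpftool', 'map', 'create']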
Jan 23 18:28:51.371776 systemd-networkd[1517]: vxlan.calico: Link UP Jan 23 18:28:51.371839 systemd-networkd[1517]: vxlan.calico: Gained carrier Jan 23 18:28:51.415483 containerd[1632]: time="2026-01-23T18:28:51.414456268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbkc5,Uid:420164f1-10e4-4309-843a-9bf4c7513aff,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:51.416341 containerd[1632]: time="2026-01-23T18:28:51.416279118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6cd769bc-pxzdx,Uid:7bfb42fc-77fc-4491-a374-12534b8ba3b1,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:51.457788 systemd-networkd[1517]: cali511baef1a10: Link UP Jan 23 18:28:51.457000 audit: BPF prog-id=205 op=LOAD Jan 23 18:28:51.459000 audit: BPF prog-id=206 op=LOAD Jan 23 18:28:51.459000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.459000 audit: BPF prog-id=206 op=UNLOAD Jan 23 18:28:51.459000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.463538 systemd-networkd[1517]: cali511baef1a10: Gained carrier Jan 23 18:28:51.462000 audit: BPF prog-id=207 op=LOAD Jan 23 18:28:51.462000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.466000 audit: BPF prog-id=208 op=LOAD Jan 23 18:28:51.466000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.466000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.471000 audit: BPF prog-id=208 op=UNLOAD Jan 23 18:28:51.471000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.471000 audit: BPF prog-id=207 op=UNLOAD Jan 23 18:28:51.471000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.475000 audit: BPF prog-id=209 op=LOAD Jan 23 18:28:51.475000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4481 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306332633438333734323366646133646664393938383961313536 Jan 23 18:28:51.483483 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:51.512000 audit: BPF prog-id=210 op=LOAD Jan 23 18:28:51.512000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbfaa56e0 a2=98 a3=0 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.512000 audit: BPF prog-id=210 op=UNLOAD Jan 23 18:28:51.512000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcbfaa56b0 a3=0 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.512000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=211 op=LOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbfaa54f0 a2=94 a3=54428f items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=211 op=UNLOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcbfaa54f0 a2=94 a3=54428f items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:50.687 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0 coredns-674b8bbfcf- kube-system 06cbbdec-0484-47bb-b6b9-aae05580b8cd 880 0 2026-01-23 18:27:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-x2dz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali511baef1a10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:50.691 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:50.891 [INFO][4425] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" HandleID="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Workload="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:50.893 [INFO][4425] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" HandleID="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Workload="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f140), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-x2dz4", "timestamp":"2026-01-23 18:28:50.891281004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:50.895 [INFO][4425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.139 [INFO][4425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.146 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.176 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.200 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.292 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.302 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.359 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.359 [INFO][4425] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.367 [INFO][4425] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.382 [INFO][4425] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.411 [INFO][4425] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.416 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" host="localhost" Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.418 [INFO][4425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:28:51.516347 containerd[1632]: 2026-01-23 18:28:51.418 [INFO][4425] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" HandleID="k8s-pod-network.09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Workload="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516899 containerd[1632]: 2026-01-23 18:28:51.440 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06cbbdec-0484-47bb-b6b9-aae05580b8cd", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-x2dz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511baef1a10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.516899 containerd[1632]: 2026-01-23 18:28:51.440 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516899 containerd[1632]: 2026-01-23 18:28:51.440 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali511baef1a10 ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516899 containerd[1632]: 2026-01-23 18:28:51.466 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.516899 
containerd[1632]: 2026-01-23 18:28:51.470 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06cbbdec-0484-47bb-b6b9-aae05580b8cd", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab", Pod:"coredns-674b8bbfcf-x2dz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511baef1a10", MAC:"d6:95:7e:a8:fe:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:51.516899 containerd[1632]: 2026-01-23 18:28:51.503 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" Namespace="kube-system" Pod="coredns-674b8bbfcf-x2dz4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x2dz4-eth0" Jan 23 18:28:51.514000 audit: BPF prog-id=212 op=LOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbfaa5520 a2=94 a3=2 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=212 op=UNLOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcbfaa5520 a2=0 a3=2 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=213 op=LOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbfaa52d0 a2=94 a3=4 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=213 op=UNLOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcbfaa52d0 a2=94 a3=4 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.514000 audit: BPF prog-id=214 op=LOAD Jan 23 18:28:51.514000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbfaa53d0 a2=94 a3=7ffcbfaa5550 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.515000 audit: BPF prog-id=214 op=UNLOAD Jan 23 18:28:51.515000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcbfaa53d0 a2=0 a3=7ffcbfaa5550 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.515000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.516000 audit: BPF prog-id=215 op=LOAD Jan 23 18:28:51.516000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbfaa4b00 a2=94 a3=2 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.516000 audit: BPF prog-id=215 op=UNLOAD Jan 23 
18:28:51.516000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcbfaa4b00 a2=0 a3=2 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.516000 audit: BPF prog-id=216 op=LOAD Jan 23 18:28:51.516000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbfaa4c00 a2=94 a3=30 items=0 ppid=4204 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:28:51.531688 systemd[1]: Started cri-containerd-2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a.scope - libcontainer container 2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a. Jan 23 18:28:51.534000 audit: BPF prog-id=217 op=LOAD Jan 23 18:28:51.534000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf5c8fac0 a2=98 a3=0 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.534000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.534000 audit: BPF prog-id=217 op=UNLOAD Jan 23 18:28:51.534000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcf5c8fa90 a3=0 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.534000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.536000 audit: BPF prog-id=218 op=LOAD Jan 23 18:28:51.536000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5c8f8b0 a2=94 a3=54428f items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.536000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.536000 audit: BPF prog-id=218 op=UNLOAD Jan 23 18:28:51.536000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5c8f8b0 a2=94 a3=54428f items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.536000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.537000 audit: BPF prog-id=219 op=LOAD Jan 23 18:28:51.537000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5c8f8e0 a2=94 a3=2 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.537000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.537000 audit: BPF prog-id=219 op=UNLOAD Jan 23 18:28:51.537000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5c8f8e0 a2=0 a3=2 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.537000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:51.600654 containerd[1632]: time="2026-01-23T18:28:51.600369473Z" level=info msg="connecting to shim 09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab" address="unix:///run/containerd/s/9b3d94e68e13e85d73318100495f5ed10e839ab821a5f47939bde55a43939167" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:51.613000 audit: BPF prog-id=220 op=LOAD Jan 23 18:28:51.618000 audit: BPF prog-id=221 op=LOAD Jan 23 18:28:51.618000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.618000 audit: BPF prog-id=221 op=UNLOAD Jan 23 18:28:51.618000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.624000 audit: BPF prog-id=222 op=LOAD Jan 23 18:28:51.624000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.624000 audit: BPF prog-id=223 op=LOAD Jan 23 18:28:51.624000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.624000 audit: BPF prog-id=223 op=UNLOAD Jan 23 18:28:51.624000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.624000 audit: BPF prog-id=222 op=UNLOAD Jan 23 18:28:51.624000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.624000 audit: BPF prog-id=224 op=LOAD Jan 23 18:28:51.624000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4518 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323633386664313639333862393665383038663535653462383230 Jan 23 18:28:51.627860 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:51.691539 containerd[1632]: time="2026-01-23T18:28:51.691270511Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6469486c9-vgp5c,Uid:45c79f90-5bfc-4e7b-ac61-b9e42301e7a5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"200c2c4837423fda3dfd99889a15602b1d5c7d9dfad05bde0579fb25a96c44e9\"" Jan 23 18:28:51.711549 containerd[1632]: time="2026-01-23T18:28:51.711476843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:28:51.712737 systemd[1]: Started cri-containerd-09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab.scope - libcontainer container 09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab. Jan 23 18:28:51.756000 audit: BPF prog-id=225 op=LOAD Jan 23 18:28:51.758000 audit: BPF prog-id=226 op=LOAD Jan 23 18:28:51.758000 audit[4639]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.759000 audit: BPF prog-id=226 op=UNLOAD Jan 23 18:28:51.759000 audit[4639]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.760000 audit: BPF prog-id=227 op=LOAD Jan 23 18:28:51.760000 audit[4639]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.760000 audit: BPF prog-id=228 op=LOAD Jan 23 18:28:51.760000 audit[4639]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.760000 audit: BPF prog-id=228 op=UNLOAD Jan 23 18:28:51.760000 audit[4639]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.761000 audit: BPF prog-id=227 op=UNLOAD Jan 23 18:28:51.761000 audit[4639]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.761000 audit: BPF prog-id=229 op=LOAD Jan 23 18:28:51.761000 audit[4639]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4621 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:51.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623735666437333062623262653662376630663438383066336134 Jan 23 18:28:51.765945 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:51.794440 containerd[1632]: time="2026-01-23T18:28:51.790672178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4j94s,Uid:308cfc11-36f8-46bc-bb62-85fc8219ee01,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a\"" Jan 23 18:28:51.794572 kubelet[2842]: E0123 18:28:51.792665 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:51.794733 containerd[1632]: time="2026-01-23T18:28:51.794711534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:51.804271 containerd[1632]: time="2026-01-23T18:28:51.801555945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:28:51.804271 containerd[1632]: time="2026-01-23T18:28:51.801715832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:51.805522 containerd[1632]: time="2026-01-23T18:28:51.804648834Z" level=info msg="CreateContainer within sandbox \"2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:28:51.805568 kubelet[2842]: 
E0123 18:28:51.802052 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:51.805568 kubelet[2842]: E0123 18:28:51.802095 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:51.805568 kubelet[2842]: E0123 18:28:51.802614 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qldmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:51.805568 kubelet[2842]: E0123 18:28:51.803940 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:28:51.842967 containerd[1632]: time="2026-01-23T18:28:51.842844439Z" level=info msg="Container 5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:51.875491 containerd[1632]: time="2026-01-23T18:28:51.871352176Z" level=info msg="CreateContainer within sandbox \"2a2638fd16938b96e808f55e4b820d9563909cc6764352a3d224900ce082912a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c\"" Jan 23 18:28:51.875491 containerd[1632]: time="2026-01-23T18:28:51.873896543Z" level=info msg="StartContainer for \"5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c\"" Jan 23 18:28:51.881988 containerd[1632]: time="2026-01-23T18:28:51.881910890Z" level=info msg="connecting to shim 5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c" address="unix:///run/containerd/s/76a9f6fd0817544cfe0336bfb2dc35557b3abd810133ffc625bf6a79fa050607" protocol=ttrpc version=3 Jan 23 18:28:51.911812 systemd-networkd[1517]: calia219162f49b: Link UP Jan 23 18:28:51.914497 systemd-networkd[1517]: calia219162f49b: Gained carrier Jan 23 18:28:51.926117 containerd[1632]: time="2026-01-23T18:28:51.926082191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x2dz4,Uid:06cbbdec-0484-47bb-b6b9-aae05580b8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab\"" Jan 23 18:28:51.929621 kubelet[2842]: E0123 18:28:51.929596 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:51.932689 systemd[1]: Started cri-containerd-5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c.scope - libcontainer container 5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c. 
Jan 23 18:28:51.938791 containerd[1632]: time="2026-01-23T18:28:51.938757776Z" level=info msg="CreateContainer within sandbox \"09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:28:52.000905 containerd[1632]: time="2026-01-23T18:28:51.994176335Z" level=info msg="Container 22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:28:52.041000 audit: BPF prog-id=230 op=LOAD Jan 23 18:28:52.041000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5c8f7a0 a2=94 a3=1 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.041000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.041000 audit: BPF prog-id=230 op=UNLOAD Jan 23 18:28:52.041000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5c8f7a0 a2=94 a3=1 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.041000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.611 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fbkc5-eth0 csi-node-driver- calico-system 420164f1-10e4-4309-843a-9bf4c7513aff 777 0 2026-01-23 18:28:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fbkc5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia219162f49b [] [] }} ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.612 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.758 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" HandleID="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Workload="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.759 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" HandleID="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Workload="localhost-k8s-csi--node--driver--fbkc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fbkc5", "timestamp":"2026-01-23 18:28:51.758659429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.759 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.759 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.760 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.785 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.804 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.830 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.841 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.852 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.854 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.859 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2 Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.878 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.889 [INFO][4632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.890 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" host="localhost" Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.890 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:28:52.064299 containerd[1632]: 2026-01-23 18:28:51.890 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" HandleID="k8s-pod-network.f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Workload="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:51.900 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fbkc5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"420164f1-10e4-4309-843a-9bf4c7513aff", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fbkc5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia219162f49b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:51.901 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:51.901 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia219162f49b ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:51.926 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:51.927 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fbkc5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"420164f1-10e4-4309-843a-9bf4c7513aff", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2", Pod:"csi-node-driver-fbkc5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia219162f49b", MAC:"a6:87:08:55:2f:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:52.065245 containerd[1632]: 2026-01-23 18:28:52.019 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" Namespace="calico-system" Pod="csi-node-driver-fbkc5" WorkloadEndpoint="localhost-k8s-csi--node--driver--fbkc5-eth0" Jan 23 18:28:52.068269 containerd[1632]: time="2026-01-23T18:28:52.068174788Z" level=info msg="CreateContainer within sandbox \"09b75fd730bb2be6b7f0f4880f3a4ff793739542f5635496968737ac5f365eab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db\"" Jan 23 18:28:52.070510 containerd[1632]: time="2026-01-23T18:28:52.069582004Z" level=info msg="StartContainer for \"22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db\"" Jan 23 18:28:52.076211 containerd[1632]: time="2026-01-23T18:28:52.076182002Z" level=info msg="connecting to shim 22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db" address="unix:///run/containerd/s/9b3d94e68e13e85d73318100495f5ed10e839ab821a5f47939bde55a43939167" protocol=ttrpc version=3 Jan 23 18:28:52.078618 kubelet[2842]: E0123 18:28:52.078585 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:28:52.081000 audit: BPF prog-id=231 op=LOAD Jan 23 18:28:52.083675 kubelet[2842]: E0123 18:28:52.083504 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed 
to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:28:52.086531 kernel: kauditd_printk_skb: 274 callbacks suppressed Jan 23 18:28:52.086594 kernel: audit: type=1334 audit(1769192932.081:682): prog-id=231 op=LOAD Jan 23 18:28:52.093354 kernel: audit: type=1334 audit(1769192932.083:683): prog-id=232 op=LOAD Jan 23 18:28:52.083000 audit: BPF prog-id=232 op=LOAD Jan 23 18:28:52.083000 audit[4695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.108626 kernel: audit: type=1300 audit(1769192932.083:683): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.108683 kernel: audit: type=1327 audit(1769192932.083:683): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.085000 audit: BPF prog-id=232 op=UNLOAD Jan 23 18:28:52.127897 kernel: audit: type=1334 audit(1769192932.085:684): prog-id=232 op=UNLOAD Jan 23 18:28:52.128304 kernel: audit: type=1300 audit(1769192932.085:684): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.085000 audit[4695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.085000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.180056 kernel: audit: type=1327 audit(1769192932.085:684): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.088000 audit: BPF prog-id=233 op=LOAD Jan 23 18:28:52.207220 kernel: audit: type=1334 audit(1769192932.088:685): prog-id=233 op=LOAD Jan 23 18:28:52.207796 kernel: audit: type=1300 audit(1769192932.088:685): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.088000 audit[4695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.235730 kernel: audit: type=1327 audit(1769192932.088:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.090000 audit: BPF prog-id=234 op=LOAD Jan 23 18:28:52.090000 audit[4695]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.090000 audit: BPF prog-id=234 op=UNLOAD Jan 23 18:28:52.090000 audit[4695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.090000 audit: BPF prog-id=233 op=UNLOAD Jan 23 18:28:52.090000 audit[4695]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.091000 audit: BPF prog-id=235 op=LOAD Jan 23 18:28:52.091000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5c8f790 a2=94 a3=4 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.091000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.093000 audit: BPF prog-id=235 op=UNLOAD Jan 23 18:28:52.093000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf5c8f790 a2=0 a3=4 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.094000 audit: BPF prog-id=236 op=LOAD Jan 23 18:28:52.094000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf5c8f5f0 a2=94 a3=5 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.094000 audit: BPF prog-id=236 op=UNLOAD Jan 23 18:28:52.094000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf5c8f5f0 a2=0 a3=5 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.095000 audit: BPF prog-id=237 op=LOAD Jan 23 18:28:52.095000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5c8f810 a2=94 a3=6 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.095000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.095000 audit: BPF prog-id=237 op=UNLOAD Jan 23 18:28:52.095000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf5c8f810 a2=0 a3=6 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.092000 audit: BPF prog-id=238 op=LOAD Jan 23 18:28:52.092000 audit[4695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4518 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343031363135356665306134656534613033613534623737666630 Jan 23 18:28:52.101000 audit: BPF prog-id=239 op=LOAD Jan 23 18:28:52.101000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5c8efc0 a2=94 a3=88 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.101000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.103000 audit: BPF prog-id=240 op=LOAD Jan 23 18:28:52.103000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcf5c8ee40 a2=94 a3=2 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.103000 audit: BPF prog-id=240 op=UNLOAD Jan 23 18:28:52.103000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcf5c8ee70 a2=0 a3=7ffcf5c8ef70 items=0 ppid=4204 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.105000 audit: BPF prog-id=239 op=UNLOAD Jan 23 18:28:52.105000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1e57d10 a2=0 a3=7cf0acfb1f0ced03 items=0 ppid=4204 pid=4592 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:28:52.280827 systemd-networkd[1517]: cali92a500cf653: Link UP Jan 23 18:28:52.282093 systemd-networkd[1517]: cali92a500cf653: Gained carrier Jan 23 18:28:52.336057 systemd[1]: Started cri-containerd-22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db.scope - libcontainer container 22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db. Jan 23 18:28:52.335000 audit: BPF prog-id=216 op=UNLOAD Jan 23 18:28:52.336000 audit[4746]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:52.336000 audit[4746]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2d2ff6b0 a2=0 a3=7ffe2d2ff69c items=0 ppid=3005 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:52.335000 audit[4204]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0004861c0 a2=0 a3=0 items=0 ppid=4195 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.335000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 18:28:52.346000 audit[4746]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:52.346000 audit[4746]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe2d2ff6b0 a2=0 a3=0 items=0 ppid=3005 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:52.355310 containerd[1632]: time="2026-01-23T18:28:52.355184878Z" level=info msg="connecting to shim f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2" address="unix:///run/containerd/s/6cb19c3f7803df3e8ba5c4773856a8f92da6389db29c9debecce39e3123f12b9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.717 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0 calico-apiserver-5f6cd769bc- calico-apiserver 7bfb42fc-77fc-4491-a374-12534b8ba3b1 892 0 2026-01-23 18:28:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f6cd769bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f6cd769bc-pxzdx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92a500cf653 [] [] }} ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.717 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.867 [INFO][4674] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" HandleID="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Workload="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.867 [INFO][4674] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" HandleID="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Workload="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f6cd769bc-pxzdx", "timestamp":"2026-01-23 18:28:51.867711543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.867 [INFO][4674] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.890 [INFO][4674] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.891 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.910 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:51.921 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.007 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.021 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.025 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.026 [INFO][4674] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.066 [INFO][4674] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1 Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.085 [INFO][4674] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.118 [INFO][4674] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.120 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" host="localhost" Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.120 [INFO][4674] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:28:52.362739 containerd[1632]: 2026-01-23 18:28:52.120 [INFO][4674] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" HandleID="k8s-pod-network.a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Workload="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.135 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0", GenerateName:"calico-apiserver-5f6cd769bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfb42fc-77fc-4491-a374-12534b8ba3b1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6cd769bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f6cd769bc-pxzdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a500cf653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.135 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.136 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92a500cf653 ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.283 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.291 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0", GenerateName:"calico-apiserver-5f6cd769bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfb42fc-77fc-4491-a374-12534b8ba3b1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6cd769bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1", Pod:"calico-apiserver-5f6cd769bc-pxzdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a500cf653", MAC:"52:b4:46:0b:79:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:52.363799 containerd[1632]: 2026-01-23 18:28:52.339 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" Namespace="calico-apiserver" Pod="calico-apiserver-5f6cd769bc-pxzdx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f6cd769bc--pxzdx-eth0" Jan 23 18:28:52.388145 containerd[1632]: time="2026-01-23T18:28:52.387910345Z" level=info msg="StartContainer for \"5e4016155fe0a4ee4a03a54b77ff0ec905aba7cb5b1c2a8ecffa74f8a1cc370c\" returns successfully" Jan 23 18:28:52.389000 audit: BPF prog-id=241 op=LOAD Jan 23 18:28:52.392000 audit: BPF prog-id=242 op=LOAD Jan 23 18:28:52.392000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.393000 audit: BPF prog-id=242 op=UNLOAD Jan 23 18:28:52.393000 audit[4728]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.393000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.394000 audit: BPF prog-id=243 op=LOAD Jan 23 18:28:52.394000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.395000 audit: BPF prog-id=244 op=LOAD Jan 23 18:28:52.395000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.397000 audit: BPF prog-id=244 op=UNLOAD Jan 23 18:28:52.397000 audit[4728]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.397000 audit: BPF prog-id=243 op=UNLOAD Jan 23 18:28:52.397000 audit[4728]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.397000 audit: BPF prog-id=245 op=LOAD Jan 23 18:28:52.397000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4621 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.397000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232646362303665363165353631633830343964373633663865343636 Jan 23 18:28:52.428132 containerd[1632]: time="2026-01-23T18:28:52.427911824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-qrngm,Uid:fd8c84c1-3db5-46bc-b232-d92330035bbc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:28:52.443693 containerd[1632]: time="2026-01-23T18:28:52.443647881Z" level=info msg="connecting to shim a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1" address="unix:///run/containerd/s/abcbccd1dcb090fdf910442b3ee0316afefa82370fa976bdef2adf0ac1bb6279" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:52.474803 systemd[1]: Started cri-containerd-f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2.scope - libcontainer container f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2. Jan 23 18:28:52.532496 systemd-networkd[1517]: calibc58edbb86d: Gained IPv6LL Jan 23 18:28:52.537950 containerd[1632]: time="2026-01-23T18:28:52.536922080Z" level=info msg="StartContainer for \"22dcb06e61e561c8049d763f8e4661ff15f1095e92a27e729b5c7bef601677db\" returns successfully" Jan 23 18:28:52.583000 audit: BPF prog-id=246 op=LOAD Jan 23 18:28:52.584000 audit: BPF prog-id=247 op=LOAD Jan 23 18:28:52.584000 audit[4790]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000274238 a2=98 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.584000 audit: BPF prog-id=247 op=UNLOAD Jan 23 18:28:52.584000 audit[4790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.587000 audit: BPF prog-id=248 op=LOAD Jan 23 18:28:52.587000 audit[4790]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000274488 a2=98 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.587000 audit: BPF prog-id=249 op=LOAD Jan 23 18:28:52.587000 audit[4790]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=22 a0=5 a1=c000274218 a2=98 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.587000 audit: BPF prog-id=249 op=UNLOAD Jan 23 18:28:52.587000 audit[4790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.587000 audit: BPF prog-id=248 op=UNLOAD Jan 23 18:28:52.587000 audit[4790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.587000 audit: BPF prog-id=250 op=LOAD Jan 23 18:28:52.587000 audit[4790]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002746e8 a2=98 a3=0 items=0 ppid=4756 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316565326566363534643561373533653263393630396265393162 Jan 23 18:28:52.611249 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:52.631330 systemd[1]: Started cri-containerd-a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1.scope - libcontainer container a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1. 
Jan 23 18:28:52.665605 systemd-networkd[1517]: calif21c066b152: Gained IPv6LL Jan 23 18:28:52.667583 systemd-networkd[1517]: cali511baef1a10: Gained IPv6LL Jan 23 18:28:52.698362 containerd[1632]: time="2026-01-23T18:28:52.698172528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbkc5,Uid:420164f1-10e4-4309-843a-9bf4c7513aff,Namespace:calico-system,Attempt:0,} returns sandbox id \"f11ee2ef654d5a753e2c9609be91b590fa6b12b4816bd0c481c32df3216ccfc2\"" Jan 23 18:28:52.704029 containerd[1632]: time="2026-01-23T18:28:52.703800617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:28:52.765000 audit: BPF prog-id=251 op=LOAD Jan 23 18:28:52.765000 audit: BPF prog-id=252 op=LOAD Jan 23 18:28:52.765000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.766000 audit: BPF prog-id=252 op=UNLOAD Jan 23 18:28:52.766000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.766000 audit: BPF prog-id=253 op=LOAD Jan 23 18:28:52.766000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.766000 audit: BPF prog-id=254 op=LOAD Jan 23 18:28:52.766000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.766000 audit: BPF prog-id=254 op=UNLOAD Jan 23 18:28:52.766000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=17 a1=0 a2=0 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.766000 audit: BPF prog-id=253 op=UNLOAD Jan 23 18:28:52.766000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.767000 audit: BPF prog-id=255 op=LOAD Jan 23 18:28:52.767000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4811 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133333266633134306531383166373962366266366435393539383139 Jan 23 18:28:52.775938 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:52.783000 audit[4916]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4916 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:52.783000 audit[4916]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffd02582e0 a2=0 a3=7fffd02582cc items=0 ppid=4204 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.783000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:52.784000 audit[4917]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:52.784000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff7f0d9490 a2=0 a3=7fff7f0d947c items=0 ppid=4204 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.784000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:52.787869 systemd-networkd[1517]: vxlan.calico: 
Gained IPv6LL Jan 23 18:28:52.816019 containerd[1632]: time="2026-01-23T18:28:52.812515113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:52.832061 containerd[1632]: time="2026-01-23T18:28:52.831815055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:28:52.832648 containerd[1632]: time="2026-01-23T18:28:52.831923255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:52.836896 kubelet[2842]: E0123 18:28:52.833629 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:28:52.836896 kubelet[2842]: E0123 18:28:52.834598 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:28:52.836896 kubelet[2842]: E0123 18:28:52.835258 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:52.840451 containerd[1632]: time="2026-01-23T18:28:52.840186908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:28:52.852000 audit[4923]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4923 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:52.852000 audit[4923]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff88cd7960 a2=0 a3=7fff88cd794c items=0 ppid=4204 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:52.856000 audit[4919]: NETFILTER_CFG table=filter:126 family=2 entries=170 op=nft_register_chain pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:52.856000 audit[4919]: SYSCALL arch=c000003e syscall=46 success=yes exit=97952 a0=3 a1=7ffc3c93b710 a2=0 a3=55bb80a9a000 items=0 ppid=4204 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:52.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:52.927911 containerd[1632]: time="2026-01-23T18:28:52.927771256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6cd769bc-pxzdx,Uid:7bfb42fc-77fc-4491-a374-12534b8ba3b1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a332fc140e181f79b6bf6d5959819f6a375a4180ba1551b3f48ba547018454e1\"" Jan 23 18:28:52.936886 containerd[1632]: time="2026-01-23T18:28:52.936667911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:52.938770 containerd[1632]: time="2026-01-23T18:28:52.938611545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:28:52.938770 containerd[1632]: time="2026-01-23T18:28:52.938645625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:52.939177 kubelet[2842]: E0123 18:28:52.939059 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:28:52.939177 kubelet[2842]: E0123 18:28:52.939118 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:28:52.939717 kubelet[2842]: E0123 18:28:52.939497 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:52.939936 containerd[1632]: time="2026-01-23T18:28:52.939665909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:28:52.941119 kubelet[2842]: E0123 18:28:52.941057 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:53.140506 systemd-networkd[1517]: calid596e1bd94e: Link UP Jan 23 18:28:53.198466 systemd-networkd[1517]: calid596e1bd94e: Gained carrier Jan 23 18:28:53.264356 containerd[1632]: time="2026-01-23T18:28:53.262552037Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:53.304093 systemd-networkd[1517]: calia219162f49b: Gained IPv6LL Jan 23 18:28:53.404261 containerd[1632]: time="2026-01-23T18:28:53.341047155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:28:53.505554 containerd[1632]: time="2026-01-23T18:28:53.342149789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:53.513735 kubelet[2842]: E0123 18:28:53.513343 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:53.516026 kubelet[2842]: E0123 18:28:53.514123 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:53.516026 kubelet[2842]: E0123 18:28:53.514637 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8s7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5f6cd769bc-pxzdx_calico-apiserver(7bfb42fc-77fc-4491-a374-12534b8ba3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:53.517072 kubelet[2842]: E0123 18:28:53.516765 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:28:53.600165 containerd[1632]: time="2026-01-23T18:28:53.599659026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qrzq4,Uid:f4756753-32cd-49e4-a9ac-3b64d97f5679,Namespace:calico-system,Attempt:0,}" Jan 23 18:28:53.604703 kubelet[2842]: E0123 18:28:53.604679 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:53.617103 kubelet[2842]: E0123 18:28:53.617072 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.673 [INFO][4816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0 calico-apiserver-6469486c9- calico-apiserver fd8c84c1-3db5-46bc-b232-d92330035bbc 887 0 2026-01-23 18:28:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6469486c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6469486c9-qrngm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid596e1bd94e [] [] }} ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.674 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.791 [INFO][4904] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" HandleID="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Workload="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.791 [INFO][4904] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" 
HandleID="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Workload="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f3b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6469486c9-qrngm", "timestamp":"2026-01-23 18:28:52.79121955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.791 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.791 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.791 [INFO][4904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.844 [INFO][4904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.865 [INFO][4904] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.923 [INFO][4904] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.926 [INFO][4904] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.930 [INFO][4904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.930 [INFO][4904] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.934 [INFO][4904] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.942 [INFO][4904] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.958 [INFO][4904] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.959 [INFO][4904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" host="localhost" Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.959 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:28:53.621528 containerd[1632]: 2026-01-23 18:28:52.959 [INFO][4904] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" HandleID="k8s-pod-network.82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Workload="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.619000 audit[4944]: NETFILTER_CFG table=filter:127 family=2 entries=127 op=nft_register_chain pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:53.619000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=74456 a0=3 a1=7ffc07868010 a2=0 a3=7ffc07867ffc items=0 ppid=4204 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:53.619000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:52.965 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0", GenerateName:"calico-apiserver-6469486c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd8c84c1-3db5-46bc-b232-d92330035bbc", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6469486c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6469486c9-qrngm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid596e1bd94e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:52.966 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:52.966 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid596e1bd94e 
ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:53.198 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:53.200 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0", GenerateName:"calico-apiserver-6469486c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd8c84c1-3db5-46bc-b232-d92330035bbc", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6469486c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc", Pod:"calico-apiserver-6469486c9-qrngm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid596e1bd94e", MAC:"92:e1:45:73:33:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:53.625100 containerd[1632]: 2026-01-23 18:28:53.597 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" Namespace="calico-apiserver" Pod="calico-apiserver-6469486c9-qrngm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6469486c9--qrngm-eth0" Jan 23 18:28:53.626530 kubelet[2842]: E0123 18:28:53.626296 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:28:53.632790 kubelet[2842]: E0123 18:28:53.632665 2842 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:28:53.634064 kubelet[2842]: E0123 18:28:53.633872 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:53.890521 kubelet[2842]: I0123 18:28:53.888842 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x2dz4" podStartSLOduration=55.888824233 podStartE2EDuration="55.888824233s" podCreationTimestamp="2026-01-23 18:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:53.642512958 +0000 UTC m=+59.603795940" watchObservedRunningTime="2026-01-23 18:28:53.888824233 +0000 UTC m=+59.850107216" Jan 23 18:28:53.902514 containerd[1632]: time="2026-01-23T18:28:53.902465897Z" level=info msg="connecting to shim 82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc" address="unix:///run/containerd/s/33039271058cf1faf0ed34b0297e583b63f1534c1c6c85307cdc1870d1bbc451" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:53.912306 kubelet[2842]: I0123 18:28:53.912177 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4j94s" podStartSLOduration=55.912140686 podStartE2EDuration="55.912140686s" podCreationTimestamp="2026-01-23 18:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:53.911114332 +0000 UTC m=+59.872397315" watchObservedRunningTime="2026-01-23 18:28:53.912140686 +0000 UTC m=+59.873423669" Jan 23 18:28:53.927000 audit[4976]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:53.927000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5236b260 a2=0 a3=7fff5236b24c items=0 ppid=3005 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:53.927000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:53.937000 audit[4976]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:53.937000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5236b260 a2=0 a3=0 items=0 ppid=3005 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:53.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:54.002687 systemd[1]: Started cri-containerd-82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc.scope - libcontainer container 82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc. Jan 23 18:28:54.008000 audit[5005]: NETFILTER_CFG table=filter:130 family=2 entries=57 op=nft_register_chain pid=5005 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:54.008000 audit[5005]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffcac170930 a2=0 a3=7ffcac17091c items=0 ppid=4204 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:54.070000 audit[5018]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:54.070000 audit[5018]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcab619610 a2=0 a3=7ffcab6195fc items=0 ppid=3005 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:54.078000 audit[5018]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:54.078000 audit[5018]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcab619610 a2=0 a3=0 items=0 ppid=3005 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.078000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:54.081000 audit: BPF prog-id=256 op=LOAD Jan 23 18:28:54.082000 audit: BPF prog-id=257 op=LOAD Jan 23 18:28:54.082000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:28:54.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.083000 audit: BPF prog-id=257 op=UNLOAD Jan 23 18:28:54.083000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.084000 audit: BPF prog-id=258 op=LOAD Jan 23 18:28:54.084000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.086000 audit: BPF prog-id=259 op=LOAD Jan 23 18:28:54.086000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.086000 audit: BPF prog-id=259 op=UNLOAD Jan 23 18:28:54.086000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.086000 audit: BPF prog-id=258 op=UNLOAD Jan 23 18:28:54.086000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.086000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.086000 audit: BPF prog-id=260 op=LOAD Jan 23 18:28:54.086000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4975 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343134643033636632633733373234643162663666373531663737 Jan 23 18:28:54.095113 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:54.137062 systemd-networkd[1517]: cali92a500cf653: Gained IPv6LL Jan 23 18:28:54.164492 systemd-networkd[1517]: cali08ec82bec0a: Link UP Jan 23 18:28:54.168583 systemd-networkd[1517]: cali08ec82bec0a: Gained carrier Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:53.915 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--qrzq4-eth0 goldmane-666569f655- calico-system f4756753-32cd-49e4-a9ac-3b64d97f5679 889 0 2026-01-23 18:28:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-qrzq4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08ec82bec0a [] [] }} ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:53.915 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.030 [INFO][4990] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" HandleID="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Workload="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.032 [INFO][4990] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" HandleID="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Workload="localhost-k8s-goldmane--666569f655--qrzq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000424500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-qrzq4", "timestamp":"2026-01-23 18:28:54.030201453 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.032 [INFO][4990] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.032 [INFO][4990] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.032 [INFO][4990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.064 [INFO][4990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.076 [INFO][4990] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.085 [INFO][4990] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.092 [INFO][4990] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.097 [INFO][4990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.098 [INFO][4990] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.101 [INFO][4990] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.108 [INFO][4990] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.119 [INFO][4990] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.119 [INFO][4990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" host="localhost" Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.119 [INFO][4990] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
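
The audit PROCTITLE fields that recur in this log (for the runc and iptables-nft-restore events above) are the process's argv hex-encoded, with NUL bytes separating the arguments. The small decoder below is a sketch under just that assumption; decode_proctitle is not part of any audit tooling, it only shows how to recover the readable command line from such a record.

def decode_proctitle(hex_title: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_title)
    return " ".join(part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # The iptables-nft-restore proctitle that recurs in the NETFILTER_CFG records above.
    sample = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
              "002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
    print(decode_proctitle(sample))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

Applied to the runc PROCTITLE values above, the same function yields the corresponding "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/a332fc140e181f79b6bf6d5959819" invocation, with the task path truncated exactly as it is in the audit record itself.
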
Jan 23 18:28:54.215723 containerd[1632]: 2026-01-23 18:28:54.119 [INFO][4990] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" HandleID="k8s-pod-network.4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Workload="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.124 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--qrzq4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4756753-32cd-49e4-a9ac-3b64d97f5679", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-qrzq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08ec82bec0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.124 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.124 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ec82bec0a ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.169 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.170 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--qrzq4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4756753-32cd-49e4-a9ac-3b64d97f5679", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b", Pod:"goldmane-666569f655-qrzq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08ec82bec0a", MAC:"e6:39:b2:32:95:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:28:54.216695 containerd[1632]: 2026-01-23 18:28:54.210 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" Namespace="calico-system" Pod="goldmane-666569f655-qrzq4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qrzq4-eth0" Jan 23 18:28:54.327082 containerd[1632]: time="2026-01-23T18:28:54.325732438Z" level=info msg="connecting to shim 4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b" address="unix:///run/containerd/s/48d72480e83e04274ce178a4edebd47c3402a3efe1fbab00abda4c73574e0dc8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:28:54.324000 audit[5050]: NETFILTER_CFG table=filter:133 family=2 entries=68 op=nft_register_chain pid=5050 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:28:54.324000 audit[5050]: SYSCALL arch=c000003e syscall=46 success=yes exit=32292 a0=3 a1=7ffe8b320a20 a2=0 a3=7ffe8b320a0c items=0 ppid=4204 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.324000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:28:54.337743 containerd[1632]: time="2026-01-23T18:28:54.337640512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6469486c9-qrngm,Uid:fd8c84c1-3db5-46bc-b232-d92330035bbc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82414d03cf2c73724d1bf6f751f7756825846685df5151cec4c8217f998464fc\"" Jan 23 18:28:54.341867 containerd[1632]: time="2026-01-23T18:28:54.341747192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:28:54.394845 systemd[1]: Started cri-containerd-4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b.scope - libcontainer container 
4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b. Jan 23 18:28:54.440936 containerd[1632]: time="2026-01-23T18:28:54.440087880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:54.458831 containerd[1632]: time="2026-01-23T18:28:54.451074390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:28:54.456761 systemd-networkd[1517]: calid596e1bd94e: Gained IPv6LL Jan 23 18:28:54.463857 containerd[1632]: time="2026-01-23T18:28:54.458054557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:54.463918 kubelet[2842]: E0123 18:28:54.462660 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:54.463918 kubelet[2842]: E0123 18:28:54.463116 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:28:54.466468 kubelet[2842]: E0123 18:28:54.466337 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jztrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-qrngm_calico-apiserver(fd8c84c1-3db5-46bc-b232-d92330035bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:28:54.470646 kubelet[2842]: E0123 18:28:54.470317 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:28:54.514000 audit: BPF prog-id=261 op=LOAD Jan 23 18:28:54.519000 audit: BPF prog-id=262 op=LOAD Jan 23 18:28:54.519000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.519000 audit: BPF prog-id=262 op=UNLOAD Jan 23 18:28:54.519000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.520000 audit: BPF prog-id=263 op=LOAD Jan 23 18:28:54.520000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.520000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.520000 audit: BPF prog-id=264 op=LOAD Jan 23 18:28:54.520000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.521000 audit: BPF prog-id=264 op=UNLOAD Jan 23 18:28:54.521000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.521000 audit: BPF prog-id=263 op=UNLOAD Jan 23 18:28:54.521000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.521000 audit: BPF prog-id=265 op=LOAD Jan 23 18:28:54.521000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:54.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333261643663386362306633653239363039376666346235303934 Jan 23 18:28:54.528079 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:28:54.701183 containerd[1632]: time="2026-01-23T18:28:54.701097434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qrzq4,Uid:f4756753-32cd-49e4-a9ac-3b64d97f5679,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c32ad6c8cb0f3e296097ff4b50942fcc8e27abbf17eb9086eb4ae395d17ee0b\"" Jan 23 18:28:54.702667 kubelet[2842]: E0123 18:28:54.702279 2842 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:54.702667 kubelet[2842]: E0123 18:28:54.702316 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:28:54.702667 kubelet[2842]: E0123 18:28:54.702629 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:28:54.703903 kubelet[2842]: E0123 18:28:54.703738 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:54.707003 containerd[1632]: time="2026-01-23T18:28:54.706922551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:28:54.709072 kubelet[2842]: E0123 18:28:54.708921 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:28:54.797373 containerd[1632]: time="2026-01-23T18:28:54.797037016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:28:54.801784 containerd[1632]: time="2026-01-23T18:28:54.801647486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:28:54.802195 containerd[1632]: time="2026-01-23T18:28:54.802015386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:28:54.804056 kubelet[2842]: E0123 18:28:54.803841 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:28:54.804056 kubelet[2842]: E0123 18:28:54.804021 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:28:54.804537 kubelet[2842]: E0123 18:28:54.804335 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qrzq4_calico-system(f4756753-32cd-49e4-a9ac-3b64d97f5679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:28:54.806328 kubelet[2842]: E0123 18:28:54.806231 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:28:55.023000 audit[5086]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:55.023000 audit[5086]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec5ed2910 a2=0 a3=7ffec5ed28fc items=0 ppid=3005 pid=5086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:55.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:55.032000 audit[5086]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:55.032000 audit[5086]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffec5ed2910 a2=0 a3=7ffec5ed28fc items=0 ppid=3005 pid=5086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:55.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:55.708954 kubelet[2842]: E0123 18:28:55.708585 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:55.711349 kubelet[2842]: E0123 18:28:55.711005 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:28:55.711349 kubelet[2842]: E0123 18:28:55.711070 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:28:55.711349 kubelet[2842]: E0123 18:28:55.711117 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:28:56.157000 audit[5090]: NETFILTER_CFG 
table=filter:136 family=2 entries=14 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:56.157000 audit[5090]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd0c48660 a2=0 a3=7fffd0c4864c items=0 ppid=3005 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:56.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:56.192000 audit[5090]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:28:56.192000 audit[5090]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffd0c48660 a2=0 a3=7fffd0c4864c items=0 ppid=3005 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:28:56.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:28:56.264248 systemd-networkd[1517]: cali08ec82bec0a: Gained IPv6LL Jan 23 18:28:56.715855 kubelet[2842]: E0123 18:28:56.715672 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:29:04.416156 containerd[1632]: time="2026-01-23T18:29:04.415691793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:04.494186 containerd[1632]: time="2026-01-23T18:29:04.494094055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:04.496292 containerd[1632]: time="2026-01-23T18:29:04.496181537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:04.496292 containerd[1632]: time="2026-01-23T18:29:04.496313333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:04.496904 kubelet[2842]: E0123 18:29:04.496690 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:04.496904 kubelet[2842]: E0123 18:29:04.496765 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 
18:29:04.497585 kubelet[2842]: E0123 18:29:04.497025 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qldmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:04.498332 kubelet[2842]: E0123 18:29:04.498151 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:29:06.413963 kubelet[2842]: E0123 18:29:06.412715 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:06.419503 containerd[1632]: time="2026-01-23T18:29:06.419200781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:29:06.516131 containerd[1632]: time="2026-01-23T18:29:06.515952079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:06.517880 containerd[1632]: 
time="2026-01-23T18:29:06.517679347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:29:06.517880 containerd[1632]: time="2026-01-23T18:29:06.517856155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:06.518361 kubelet[2842]: E0123 18:29:06.518255 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:29:06.518361 kubelet[2842]: E0123 18:29:06.518318 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:29:06.518916 kubelet[2842]: E0123 18:29:06.518722 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2adf5ce0a43f474faed108d4fa915a26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:06.519342 containerd[1632]: time="2026-01-23T18:29:06.519280530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:29:06.602842 containerd[1632]: time="2026-01-23T18:29:06.602597312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:06.604912 containerd[1632]: time="2026-01-23T18:29:06.604662637Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:29:06.604912 containerd[1632]: time="2026-01-23T18:29:06.604771800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:06.605209 kubelet[2842]: E0123 18:29:06.605028 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:29:06.605209 kubelet[2842]: E0123 18:29:06.605098 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:29:06.605509 kubelet[2842]: E0123 18:29:06.605350 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbrtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:06.607274 kubelet[2842]: E0123 18:29:06.606902 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:29:06.607352 containerd[1632]: time="2026-01-23T18:29:06.607012627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:29:06.686179 containerd[1632]: time="2026-01-23T18:29:06.686028894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:06.688251 containerd[1632]: time="2026-01-23T18:29:06.688102155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:29:06.688350 containerd[1632]: time="2026-01-23T18:29:06.688261270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:06.688666 kubelet[2842]: E0123 18:29:06.688565 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:29:06.688768 kubelet[2842]: E0123 18:29:06.688666 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:29:06.689290 kubelet[2842]: E0123 18:29:06.688920 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:06.690696 kubelet[2842]: E0123 18:29:06.690576 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:29:07.412239 kubelet[2842]: E0123 18:29:07.412054 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:08.937115 systemd[1]: Started sshd@7-10.0.0.29:22-10.0.0.1:53242.service - OpenSSH per-connection server daemon (10.0.0.1:53242). Jan 23 18:29:08.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.29:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:08.972593 kernel: kauditd_printk_skb: 206 callbacks suppressed Jan 23 18:29:08.972754 kernel: audit: type=1130 audit(1769192948.935:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.29:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:09.121000 audit[5115]: USER_ACCT pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.123853 sshd[5115]: Accepted publickey for core from 10.0.0.1 port 53242 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:09.126875 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:09.123000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.138555 systemd-logind[1586]: New session 9 of user core. Jan 23 18:29:09.150957 kernel: audit: type=1101 audit(1769192949.121:759): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.151094 kernel: audit: type=1103 audit(1769192949.123:760): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.151538 kernel: audit: type=1006 audit(1769192949.123:761): pid=5115 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 23 18:29:09.123000 audit[5115]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b14d460 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:09.183737 kernel: audit: type=1300 audit(1769192949.123:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b14d460 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:09.183988 kernel: audit: type=1327 audit(1769192949.123:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:09.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:09.190849 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 18:29:09.196000 audit[5115]: USER_START pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.199000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.235336 kernel: audit: type=1105 audit(1769192949.196:762): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.240133 kernel: audit: type=1103 audit(1769192949.199:763): pid=5119 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.416929 containerd[1632]: time="2026-01-23T18:29:09.416842429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:29:09.451159 sshd[5119]: Connection closed by 10.0.0.1 port 53242 Jan 23 18:29:09.452656 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:09.454000 audit[5115]: USER_END pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.460596 systemd[1]: sshd@7-10.0.0.29:22-10.0.0.1:53242.service: Deactivated successfully. Jan 23 18:29:09.464100 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:29:09.466559 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:29:09.468228 systemd-logind[1586]: Removed session 9. Jan 23 18:29:09.454000 audit[5115]: CRED_DISP pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.486341 kernel: audit: type=1106 audit(1769192949.454:764): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.486522 kernel: audit: type=1104 audit(1769192949.454:765): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:09.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.29:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:29:09.508137 containerd[1632]: time="2026-01-23T18:29:09.508041992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:09.510451 containerd[1632]: time="2026-01-23T18:29:09.510198580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:29:09.510451 containerd[1632]: time="2026-01-23T18:29:09.510236159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:09.511010 kubelet[2842]: E0123 18:29:09.510867 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:29:09.511010 kubelet[2842]: E0123 18:29:09.510975 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:29:09.512661 containerd[1632]: time="2026-01-23T18:29:09.511843379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:09.512742 kubelet[2842]: E0123 18:29:09.512365 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:09.614278 containerd[1632]: time="2026-01-23T18:29:09.613903954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:09.618056 containerd[1632]: time="2026-01-23T18:29:09.617636027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:09.618283 containerd[1632]: time="2026-01-23T18:29:09.618100202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:09.618505 kubelet[2842]: E0123 18:29:09.618181 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:09.618745 kubelet[2842]: E0123 18:29:09.618586 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:09.619371 containerd[1632]: time="2026-01-23T18:29:09.619314376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:29:09.621119 kubelet[2842]: E0123 18:29:09.620372 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8s7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6cd769bc-pxzdx_calico-apiserver(7bfb42fc-77fc-4491-a374-12534b8ba3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:09.623220 kubelet[2842]: E0123 18:29:09.623069 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:29:09.720955 containerd[1632]: time="2026-01-23T18:29:09.720129981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:09.724013 containerd[1632]: time="2026-01-23T18:29:09.723848963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:29:09.724013 containerd[1632]: time="2026-01-23T18:29:09.723934933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:09.724478 kubelet[2842]: E0123 18:29:09.724259 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:29:09.724815 kubelet[2842]: E0123 18:29:09.724378 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:29:09.725205 kubelet[2842]: E0123 18:29:09.725151 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:09.726949 kubelet[2842]: E0123 18:29:09.726747 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:29:11.415113 containerd[1632]: time="2026-01-23T18:29:11.414609956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:29:11.512813 containerd[1632]: time="2026-01-23T18:29:11.512611633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:11.515363 containerd[1632]: time="2026-01-23T18:29:11.515155074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:29:11.515647 containerd[1632]: time="2026-01-23T18:29:11.515251197Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:11.516003 kubelet[2842]: E0123 18:29:11.515876 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:29:11.516003 kubelet[2842]: E0123 18:29:11.515997 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:29:11.516663 kubelet[2842]: E0123 18:29:11.516524 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-qrzq4_calico-system(f4756753-32cd-49e4-a9ac-3b64d97f5679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:11.516911 containerd[1632]: time="2026-01-23T18:29:11.516546255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:11.518722 kubelet[2842]: E0123 18:29:11.518369 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:29:11.597279 containerd[1632]: time="2026-01-23T18:29:11.597024720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:11.599706 containerd[1632]: time="2026-01-23T18:29:11.599592506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:11.599706 containerd[1632]: time="2026-01-23T18:29:11.599680301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:11.600374 kubelet[2842]: E0123 18:29:11.600281 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:11.600540 kubelet[2842]: E0123 18:29:11.600365 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:11.600845 kubelet[2842]: E0123 18:29:11.600667 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jztrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-qrngm_calico-apiserver(fd8c84c1-3db5-46bc-b232-d92330035bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:11.602392 kubelet[2842]: E0123 18:29:11.602235 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:29:13.413809 kubelet[2842]: E0123 18:29:13.412687 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:14.483375 systemd[1]: Started sshd@8-10.0.0.29:22-10.0.0.1:40036.service - OpenSSH per-connection server daemon (10.0.0.1:40036). Jan 23 18:29:14.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.29:22-10.0.0.1:40036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:14.488561 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:14.488639 kernel: audit: type=1130 audit(1769192954.482:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.29:22-10.0.0.1:40036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:14.576000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.577277 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 40036 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:14.580913 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:14.592546 kernel: audit: type=1101 audit(1769192954.576:768): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.592669 kernel: audit: type=1103 audit(1769192954.578:769): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.578000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.592318 systemd-logind[1586]: New session 10 of user core. Jan 23 18:29:14.615542 kernel: audit: type=1006 audit(1769192954.578:770): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 23 18:29:14.615813 kernel: audit: type=1300 audit(1769192954.578:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6b5e6730 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:14.578000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6b5e6730 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:14.631184 kernel: audit: type=1327 audit(1769192954.578:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:14.578000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:14.638992 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 18:29:14.645000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.651000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.689026 kernel: audit: type=1105 audit(1769192954.645:771): pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.689179 kernel: audit: type=1103 audit(1769192954.651:772): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.823557 sshd[5146]: Connection closed by 10.0.0.1 port 40036 Jan 23 18:29:14.825158 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:14.829000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.833896 systemd[1]: sshd@8-10.0.0.29:22-10.0.0.1:40036.service: Deactivated successfully. Jan 23 18:29:14.838942 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:29:14.843164 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:29:14.845482 systemd-logind[1586]: Removed session 10. Jan 23 18:29:14.829000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.868549 kernel: audit: type=1106 audit(1769192954.829:773): pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.868705 kernel: audit: type=1104 audit(1769192954.829:774): pid=5142 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:14.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.29:22-10.0.0.1:40036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:17.414175 kubelet[2842]: E0123 18:29:17.414058 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:29:19.977634 kubelet[2842]: E0123 18:29:19.975576 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:29:20.038606 systemd[1]: Started sshd@9-10.0.0.29:22-10.0.0.1:40050.service - OpenSSH per-connection server daemon (10.0.0.1:40050). Jan 23 18:29:20.047335 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:20.047681 kernel: audit: type=1130 audit(1769192960.040:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.29:22-10.0.0.1:40050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:20.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.29:22-10.0.0.1:40050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:20.586000 audit[5177]: USER_ACCT pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.596002 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:20.605988 kernel: audit: type=1101 audit(1769192960.586:777): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.606065 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 40050 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:20.606482 kubelet[2842]: E0123 18:29:20.586159 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:29:20.591000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.613592 systemd-logind[1586]: New session 11 of user core. Jan 23 18:29:20.626556 kernel: audit: type=1103 audit(1769192960.591:778): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.628013 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 18:29:20.638515 kernel: audit: type=1006 audit(1769192960.591:779): pid=5177 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 18:29:20.638632 kernel: audit: type=1300 audit(1769192960.591:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe490c87c0 a2=3 a3=0 items=0 ppid=1 pid=5177 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:20.591000 audit[5177]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe490c87c0 a2=3 a3=0 items=0 ppid=1 pid=5177 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:20.591000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:20.680699 kernel: audit: type=1327 audit(1769192960.591:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:20.638000 audit[5177]: USER_START pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.699987 kernel: audit: type=1105 audit(1769192960.638:780): pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.700756 kernel: audit: type=1103 audit(1769192960.661:781): pid=5192 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.661000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.820499 kubelet[2842]: E0123 18:29:20.819913 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:20.872516 sshd[5192]: Connection closed by 10.0.0.1 port 40050 Jan 23 18:29:20.873069 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:20.876000 audit[5177]: USER_END pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.882697 systemd[1]: sshd@9-10.0.0.29:22-10.0.0.1:40050.service: Deactivated successfully. Jan 23 18:29:20.887366 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:29:20.889673 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. 
Jan 23 18:29:20.893151 systemd-logind[1586]: Removed session 11. Jan 23 18:29:20.876000 audit[5177]: CRED_DISP pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.911524 kernel: audit: type=1106 audit(1769192960.876:782): pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.911831 kernel: audit: type=1104 audit(1769192960.876:783): pid=5177 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:20.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.29:22-10.0.0.1:40050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:21.416542 kubelet[2842]: E0123 18:29:21.415895 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:29:22.421785 kubelet[2842]: E0123 18:29:22.421666 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:29:25.893137 systemd[1]: Started sshd@10-10.0.0.29:22-10.0.0.1:38078.service - OpenSSH per-connection server daemon (10.0.0.1:38078). Jan 23 18:29:25.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.29:22-10.0.0.1:38078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:25.897662 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:25.897845 kernel: audit: type=1130 audit(1769192965.892:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.29:22-10.0.0.1:38078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:26.001000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.002022 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 38078 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:26.006145 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:26.003000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.016032 systemd-logind[1586]: New session 12 of user core. Jan 23 18:29:26.030045 kernel: audit: type=1101 audit(1769192966.001:786): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.030162 kernel: audit: type=1103 audit(1769192966.003:787): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.030205 kernel: audit: type=1006 audit(1769192966.003:788): pid=5212 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 18:29:26.038543 kernel: audit: type=1300 audit(1769192966.003:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45d68840 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:26.003000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45d68840 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:26.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:26.069946 kernel: audit: type=1327 audit(1769192966.003:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:26.074118 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 18:29:26.080000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.085000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.116054 kernel: audit: type=1105 audit(1769192966.080:789): pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.116201 kernel: audit: type=1103 audit(1769192966.085:790): pid=5216 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.287585 sshd[5216]: Connection closed by 10.0.0.1 port 38078 Jan 23 18:29:26.287982 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:26.290000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.296915 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:29:26.297293 systemd[1]: sshd@10-10.0.0.29:22-10.0.0.1:38078.service: Deactivated successfully. Jan 23 18:29:26.303696 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:29:26.308583 systemd-logind[1586]: Removed session 12. Jan 23 18:29:26.290000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.333977 kernel: audit: type=1106 audit(1769192966.290:791): pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.334085 kernel: audit: type=1104 audit(1769192966.290:792): pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:26.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.29:22-10.0.0.1:38078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:26.417109 kubelet[2842]: E0123 18:29:26.416815 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:29:26.418065 kubelet[2842]: E0123 18:29:26.417626 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:29:28.414237 kubelet[2842]: E0123 18:29:28.413192 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:28.417322 containerd[1632]: time="2026-01-23T18:29:28.417039960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:29:28.488339 containerd[1632]: time="2026-01-23T18:29:28.488164022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:28.490056 containerd[1632]: time="2026-01-23T18:29:28.489958309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:29:28.490056 containerd[1632]: time="2026-01-23T18:29:28.490040773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:28.490647 kubelet[2842]: E0123 18:29:28.490478 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:29:28.490647 kubelet[2842]: E0123 18:29:28.490640 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:29:28.490923 kubelet[2842]: E0123 18:29:28.490842 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbrtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:28.492280 kubelet[2842]: E0123 18:29:28.492210 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:29:31.310921 systemd[1]: Started sshd@11-10.0.0.29:22-10.0.0.1:38094.service - OpenSSH per-connection server daemon (10.0.0.1:38094). 
Jan 23 18:29:31.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.29:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:31.313542 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:31.313624 kernel: audit: type=1130 audit(1769192971.310:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.29:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:31.423000 audit[5232]: USER_ACCT pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.424275 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 38094 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:31.426788 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:31.434064 systemd-logind[1586]: New session 13 of user core. Jan 23 18:29:31.424000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.447076 kernel: audit: type=1101 audit(1769192971.423:795): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.447159 kernel: audit: type=1103 audit(1769192971.424:796): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.447205 kernel: audit: type=1006 audit(1769192971.424:797): pid=5232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 18:29:31.424000 audit[5232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4e4e5990 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:31.465352 kernel: audit: type=1300 audit(1769192971.424:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4e4e5990 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:31.465464 kernel: audit: type=1327 audit(1769192971.424:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:31.424000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:31.470845 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 18:29:31.474000 audit[5232]: USER_START pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.477000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.500069 kernel: audit: type=1105 audit(1769192971.474:798): pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.500140 kernel: audit: type=1103 audit(1769192971.477:799): pid=5236 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.606842 sshd[5236]: Connection closed by 10.0.0.1 port 38094 Jan 23 18:29:31.607193 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:31.609000 audit[5232]: USER_END pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.616047 systemd[1]: sshd@11-10.0.0.29:22-10.0.0.1:38094.service: Deactivated successfully. Jan 23 18:29:31.619649 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:29:31.612000 audit[5232]: CRED_DISP pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.622557 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:29:31.625058 systemd-logind[1586]: Removed session 13. Jan 23 18:29:31.629884 kernel: audit: type=1106 audit(1769192971.609:800): pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.629980 kernel: audit: type=1104 audit(1769192971.612:801): pid=5232 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:31.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.29:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:33.414621 containerd[1632]: time="2026-01-23T18:29:33.414554053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:33.491138 containerd[1632]: time="2026-01-23T18:29:33.490964411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:33.493237 containerd[1632]: time="2026-01-23T18:29:33.493063288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:33.493237 containerd[1632]: time="2026-01-23T18:29:33.493207277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:33.494351 kubelet[2842]: E0123 18:29:33.493563 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:33.494351 kubelet[2842]: E0123 18:29:33.493631 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:33.494351 kubelet[2842]: E0123 18:29:33.493864 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8s7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6cd769bc-pxzdx_calico-apiserver(7bfb42fc-77fc-4491-a374-12534b8ba3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:33.495557 kubelet[2842]: E0123 18:29:33.495361 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:29:34.413833 containerd[1632]: time="2026-01-23T18:29:34.413358895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:29:34.474444 containerd[1632]: time="2026-01-23T18:29:34.474293121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:34.476022 containerd[1632]: time="2026-01-23T18:29:34.475965447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:29:34.476172 containerd[1632]: time="2026-01-23T18:29:34.476068009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:34.476351 kubelet[2842]: E0123 18:29:34.476276 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:29:34.476495 kubelet[2842]: E0123 18:29:34.476367 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:29:34.476997 kubelet[2842]: E0123 18:29:34.476824 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2adf5ce0a43f474faed108d4fa915a26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:34.477146 containerd[1632]: time="2026-01-23T18:29:34.476897002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:34.547345 containerd[1632]: time="2026-01-23T18:29:34.546264656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:34.555162 containerd[1632]: time="2026-01-23T18:29:34.555077099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:34.555227 containerd[1632]: time="2026-01-23T18:29:34.555129832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:34.555547 kubelet[2842]: E0123 18:29:34.555459 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:34.556959 kubelet[2842]: E0123 18:29:34.555546 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:34.556959 kubelet[2842]: E0123 18:29:34.555886 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qldmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:34.557364 containerd[1632]: time="2026-01-23T18:29:34.556869075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:29:34.557592 kubelet[2842]: E0123 18:29:34.556962 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:29:34.625976 containerd[1632]: time="2026-01-23T18:29:34.625865596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:34.627676 containerd[1632]: time="2026-01-23T18:29:34.627556939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:29:34.627676 
containerd[1632]: time="2026-01-23T18:29:34.627603228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:34.627916 kubelet[2842]: E0123 18:29:34.627831 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:29:34.627916 kubelet[2842]: E0123 18:29:34.627884 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:29:34.628071 kubelet[2842]: E0123 18:29:34.628003 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8fh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cc6d476d-zc49f_calico-system(b1a5247f-3dd0-4a60-b451-df40ad40b033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:34.629440 kubelet[2842]: E0123 18:29:34.629297 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:29:36.414828 containerd[1632]: time="2026-01-23T18:29:36.414558152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:29:36.477783 containerd[1632]: time="2026-01-23T18:29:36.477576090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:36.479452 containerd[1632]: time="2026-01-23T18:29:36.479248017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:29:36.479452 containerd[1632]: time="2026-01-23T18:29:36.479350730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:36.479843 kubelet[2842]: E0123 18:29:36.479719 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:29:36.479843 kubelet[2842]: E0123 18:29:36.479832 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:29:36.480335 kubelet[2842]: E0123 18:29:36.480062 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:36.483310 containerd[1632]: time="2026-01-23T18:29:36.483049999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:29:36.546654 containerd[1632]: time="2026-01-23T18:29:36.546540535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:36.548079 containerd[1632]: time="2026-01-23T18:29:36.547994284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:29:36.548166 containerd[1632]: time="2026-01-23T18:29:36.548081118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:36.548333 kubelet[2842]: E0123 18:29:36.548246 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:29:36.548333 kubelet[2842]: E0123 18:29:36.548299 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:29:36.548581 kubelet[2842]: E0123 18:29:36.548501 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fbkc5_calico-system(420164f1-10e4-4309-843a-9bf4c7513aff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:36.550144 kubelet[2842]: E0123 18:29:36.550001 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:29:36.621068 systemd[1]: Started sshd@12-10.0.0.29:22-10.0.0.1:52300.service - OpenSSH per-connection server daemon (10.0.0.1:52300). 
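Every pull attempt above fails identically: containerd gets a 404 Not Found from ghcr.io and kubelet reports ErrImagePull for whisker, whisker-backend, apiserver, calico-csi and csi-node-driver-registrar, all at tag v3.30.4. A quick way to confirm the tag is genuinely absent from the registry (rather than a node-side auth, proxy or DNS problem) is to ask the registry directly. A minimal sketch, assuming ghcr.io hands out anonymous pull tokens for public repositories through its /token endpoint (the standard OCI distribution auth flow); the repository names are taken from the log, everything else is illustrative:

# Sketch: check whether the image tags from the log above resolve on ghcr.io.
# Assumes ghcr.io issues anonymous pull tokens for public repositories via
# its /token endpoint; a 404 on the manifest matches the containerd errors.
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    # Fetch an anonymous bearer token scoped to pulling this repository.
    with urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{repo}:pull"
    ) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest; 200 means the tag exists, 404 reproduces the
    # "failed to resolve image ... not found" errors above.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for name in ("whisker", "whisker-backend", "apiserver",
                 "csi", "node-driver-registrar"):
        found = tag_exists(f"flatcar/calico/{name}", "v3.30.4")
        print(f"ghcr.io/flatcar/calico/{name}:v3.30.4:",
              "found" if found else "not found")

If the manifest HEAD also returns 404 from outside the cluster, the tag was never pushed (or was pushed under a different name), and no amount of kubelet retrying on the node can succeed until the registry side changes.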
Jan 23 18:29:36.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.29:22-10.0.0.1:52300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:36.626484 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:36.626572 kernel: audit: type=1130 audit(1769192976.620:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.29:22-10.0.0.1:52300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:36.711000 audit[5258]: USER_ACCT pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.711951 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 52300 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:36.714602 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:36.722114 systemd-logind[1586]: New session 14 of user core. Jan 23 18:29:36.711000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.736987 kernel: audit: type=1101 audit(1769192976.711:804): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.737052 kernel: audit: type=1103 audit(1769192976.711:805): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.737103 kernel: audit: type=1006 audit(1769192976.711:806): pid=5258 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 23 18:29:36.711000 audit[5258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7163f9f0 a2=3 a3=0 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:36.759275 kernel: audit: type=1300 audit(1769192976.711:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7163f9f0 a2=3 a3=0 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:36.759361 kernel: audit: type=1327 audit(1769192976.711:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:36.711000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:36.768817 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 18:29:36.772000 audit[5258]: USER_START pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.785469 kernel: audit: type=1105 audit(1769192976.772:807): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.785529 kernel: audit: type=1103 audit(1769192976.775:808): pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.775000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.877350 sshd[5262]: Connection closed by 10.0.0.1 port 52300 Jan 23 18:29:36.877897 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:36.879000 audit[5258]: USER_END pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.880000 audit[5258]: CRED_DISP pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.901494 kernel: audit: type=1106 audit(1769192976.879:809): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.901682 kernel: audit: type=1104 audit(1769192976.880:810): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.907977 systemd[1]: sshd@12-10.0.0.29:22-10.0.0.1:52300.service: Deactivated successfully. Jan 23 18:29:36.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.29:22-10.0.0.1:52300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:36.911001 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:29:36.912375 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:29:36.917172 systemd[1]: Started sshd@13-10.0.0.29:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). 
Jan 23 18:29:36.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.29:22-10.0.0.1:52306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:36.918571 systemd-logind[1586]: Removed session 14. Jan 23 18:29:36.979000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.981161 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:36.981000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:36.981000 audit[5276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6381c200 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:36.981000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:36.983941 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:36.991482 systemd-logind[1586]: New session 15 of user core. Jan 23 18:29:37.001675 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:29:37.005000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.008000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.149958 sshd[5280]: Connection closed by 10.0.0.1 port 52306 Jan 23 18:29:37.150574 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:37.156000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.156000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.162086 systemd[1]: sshd@13-10.0.0.29:22-10.0.0.1:52306.service: Deactivated successfully. Jan 23 18:29:37.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.29:22-10.0.0.1:52306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:29:37.165530 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:29:37.167728 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:29:37.171715 systemd[1]: Started sshd@14-10.0.0.29:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314). Jan 23 18:29:37.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.29:22-10.0.0.1:52314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:37.172922 systemd-logind[1586]: Removed session 15. Jan 23 18:29:37.266000 audit[5291]: USER_ACCT pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.266980 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:37.267000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.267000 audit[5291]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea184b5b0 a2=3 a3=0 items=0 ppid=1 pid=5291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:37.267000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:37.269728 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:37.276727 systemd-logind[1586]: New session 16 of user core. Jan 23 18:29:37.286656 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:29:37.290000 audit[5291]: USER_START pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.292000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.384953 sshd[5295]: Connection closed by 10.0.0.1 port 52314 Jan 23 18:29:37.385284 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:37.387000 audit[5291]: USER_END pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.387000 audit[5291]: CRED_DISP pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:37.391167 systemd[1]: sshd@14-10.0.0.29:22-10.0.0.1:52314.service: Deactivated successfully. Jan 23 18:29:37.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.29:22-10.0.0.1:52314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:37.394138 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:29:37.395515 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:29:37.397863 systemd-logind[1586]: Removed session 16. 
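The same ErrImagePull / "Error syncing pod" pairs recur for a handful of images across several pods, which is easy to lose among the interleaved sshd and audit records. Below is a self-contained sketch of a script that condenses a captured journal like this one into per-image and per-pod failure counts; the regular expressions rely only on the kubelet/containerd wording visible above, and the default input file name is illustrative:

# Sketch: summarize image-pull failures in a captured journal like the one
# above, grouping by image reference and by the pod named in the kubelet line.
import re
import sys
from collections import Counter

# Wording taken from the kubelet/containerd entries in this log.
IMAGE_RE = re.compile(r'failed to resolve image: (\S+): not found')
POD_RE = re.compile(r'pod="([^"]+)"')

def summarize(lines):
    per_image = Counter()
    per_pod = Counter()
    for line in lines:
        # De-duplicate within one entry: each error repeats the image ref.
        for image in set(IMAGE_RE.findall(line)):
            per_image[image] += 1
        if "Error syncing pod" in line:
            for pod in set(POD_RE.findall(line)):
                per_pod[pod] += 1
    return per_image, per_pod

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.txt"
    with open(path, encoding="utf-8", errors="replace") as fh:
        per_image, per_pod = summarize(fh)
    print("image-pull failures per image:")
    for image, count in per_image.most_common():
        print(f"  {count:3d}  {image}")
    print("pods repeatedly failing to sync:")
    for pod, count in per_pod.most_common():
        print(f"  {count:3d}  {pod}")

Run against this capture, it surfaces in one screen which image references 404 and which pods are blocked on them.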
Jan 23 18:29:38.415212 kubelet[2842]: E0123 18:29:38.413267 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:29:41.414992 containerd[1632]: time="2026-01-23T18:29:41.414265023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:29:41.479522 containerd[1632]: time="2026-01-23T18:29:41.479352210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:41.481131 containerd[1632]: time="2026-01-23T18:29:41.481025602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:29:41.481463 containerd[1632]: time="2026-01-23T18:29:41.481176243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:41.481754 kubelet[2842]: E0123 18:29:41.481699 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:29:41.482194 kubelet[2842]: E0123 18:29:41.481778 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:29:41.483255 kubelet[2842]: E0123 18:29:41.482314 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qrzq4_calico-system(f4756753-32cd-49e4-a9ac-3b64d97f5679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:41.483485 containerd[1632]: time="2026-01-23T18:29:41.482713320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:29:41.483674 kubelet[2842]: E0123 18:29:41.483644 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:29:41.548299 containerd[1632]: time="2026-01-23T18:29:41.548082894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:29:41.550102 containerd[1632]: time="2026-01-23T18:29:41.549983634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:29:41.550531 kubelet[2842]: E0123 18:29:41.550323 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:41.550531 kubelet[2842]: E0123 18:29:41.550512 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:29:41.550865 containerd[1632]: time="2026-01-23T18:29:41.550178917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:29:41.550922 kubelet[2842]: E0123 18:29:41.550711 2842 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jztrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-qrngm_calico-apiserver(fd8c84c1-3db5-46bc-b232-d92330035bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:29:41.552846 kubelet[2842]: E0123 18:29:41.552557 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:29:42.456770 kubelet[2842]: E0123 18:29:42.456349 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:29:42.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.29:22-10.0.0.1:54172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:42.464483 systemd[1]: Started sshd@15-10.0.0.29:22-10.0.0.1:54172.service - OpenSSH per-connection server daemon (10.0.0.1:54172). Jan 23 18:29:42.467306 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 18:29:42.467433 kernel: audit: type=1130 audit(1769192982.463:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.29:22-10.0.0.1:54172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:42.655000 audit[5309]: USER_ACCT pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.657463 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 54172 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:42.661452 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:42.657000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.672665 systemd-logind[1586]: New session 17 of user core. Jan 23 18:29:42.681749 kernel: audit: type=1101 audit(1769192982.655:831): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.681967 kernel: audit: type=1103 audit(1769192982.657:832): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.682010 kernel: audit: type=1006 audit(1769192982.657:833): pid=5309 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 18:29:42.688602 kernel: audit: type=1300 audit(1769192982.657:833): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3561490 a2=3 a3=0 items=0 ppid=1 pid=5309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:42.657000 audit[5309]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3561490 a2=3 a3=0 items=0 ppid=1 pid=5309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:42.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:42.707513 kernel: audit: type=1327 audit(1769192982.657:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:42.707830 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 18:29:42.712000 audit[5309]: USER_START pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.716000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.737895 kernel: audit: type=1105 audit(1769192982.712:834): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.737996 kernel: audit: type=1103 audit(1769192982.716:835): pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.819981 sshd[5313]: Connection closed by 10.0.0.1 port 54172 Jan 23 18:29:42.820535 sshd-session[5309]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:42.821000 audit[5309]: USER_END pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.825976 systemd[1]: sshd@15-10.0.0.29:22-10.0.0.1:54172.service: Deactivated successfully. Jan 23 18:29:42.828893 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:29:42.830495 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:29:42.832355 systemd-logind[1586]: Removed session 17. Jan 23 18:29:42.821000 audit[5309]: CRED_DISP pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.843653 kernel: audit: type=1106 audit(1769192982.821:836): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.843738 kernel: audit: type=1104 audit(1769192982.821:837): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:42.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.29:22-10.0.0.1:54172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:44.415091 kubelet[2842]: E0123 18:29:44.414522 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:29:46.430326 kubelet[2842]: E0123 18:29:46.424295 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:29:47.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.29:22-10.0.0.1:54186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:47.837751 systemd[1]: Started sshd@16-10.0.0.29:22-10.0.0.1:54186.service - OpenSSH per-connection server daemon (10.0.0.1:54186). Jan 23 18:29:47.840355 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:47.840563 kernel: audit: type=1130 audit(1769192987.836:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.29:22-10.0.0.1:54186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:47.917000 audit[5330]: USER_ACCT pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:47.919139 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 54186 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:47.922784 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:47.919000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:47.931627 systemd-logind[1586]: New session 18 of user core. 
Jan 23 18:29:47.939492 kernel: audit: type=1101 audit(1769192987.917:840): pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:47.939575 kernel: audit: type=1103 audit(1769192987.919:841): pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:47.939604 kernel: audit: type=1006 audit(1769192987.919:842): pid=5330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 18:29:47.919000 audit[5330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac6a9ee0 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:47.966589 kernel: audit: type=1300 audit(1769192987.919:842): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac6a9ee0 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:47.966888 kernel: audit: type=1327 audit(1769192987.919:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:47.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:47.981983 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 18:29:47.985000 audit[5330]: USER_START pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:47.989000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.009495 kernel: audit: type=1105 audit(1769192987.985:843): pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.009594 kernel: audit: type=1103 audit(1769192987.989:844): pid=5334 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.094948 sshd[5334]: Connection closed by 10.0.0.1 port 54186 Jan 23 18:29:48.095075 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:48.097000 audit[5330]: USER_END pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.101477 systemd[1]: sshd@16-10.0.0.29:22-10.0.0.1:54186.service: Deactivated successfully. Jan 23 18:29:48.105211 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:29:48.109301 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:29:48.111053 systemd-logind[1586]: Removed session 18. Jan 23 18:29:48.097000 audit[5330]: CRED_DISP pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.126606 kernel: audit: type=1106 audit(1769192988.097:845): pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.126769 kernel: audit: type=1104 audit(1769192988.097:846): pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:48.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.29:22-10.0.0.1:54186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:49.415238 kubelet[2842]: E0123 18:29:49.415077 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:29:50.417155 kubelet[2842]: E0123 18:29:50.416652 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:29:53.115352 systemd[1]: Started sshd@17-10.0.0.29:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164). Jan 23 18:29:53.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.29:22-10.0.0.1:52164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:53.118858 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:53.118995 kernel: audit: type=1130 audit(1769192993.114:848): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.29:22-10.0.0.1:52164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:53.210000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.215606 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:53.220995 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:53.222973 systemd-logind[1586]: New session 19 of user core. 
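From 18:29:42 onward the kubelet mostly logs ImagePullBackOff ("Back-off pulling image ...") rather than fresh pull attempts: after each failure it waits longer before retrying, so the pulls thin out (attempts at 18:29:34, 18:29:36 and 18:29:41, then back-off messages only). A tiny sketch of such an exponential back-off schedule, assuming the commonly cited kubelet defaults of a 10 s initial delay doubling to a 300 s cap; the numbers are an assumption, not values read from this log:

# Sketch of an exponential image-pull back-off schedule. The 10 s initial
# delay and 300 s cap are assumed defaults, not values taken from this log.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, delay
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    elapsed = 0.0
    for attempt, delay in backoff_schedule():
        elapsed += delay
        print(f"attempt {attempt}: wait {delay:5.0f}s (elapsed ~{elapsed:5.0f}s)")

The practical consequence is that even after the missing tags are published, affected pods may sit in ImagePullBackOff for up to the cap before the next retry unless they are deleted and recreated.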
Jan 23 18:29:53.212000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.237484 kernel: audit: type=1101 audit(1769192993.210:849): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.237579 kernel: audit: type=1103 audit(1769192993.212:850): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.246580 kernel: audit: type=1006 audit(1769192993.212:851): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 23 18:29:53.212000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc4a82c00 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:53.212000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:53.281013 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:29:53.286013 kernel: audit: type=1300 audit(1769192993.212:851): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc4a82c00 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:53.286066 kernel: audit: type=1327 audit(1769192993.212:851): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:53.286000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.301510 kernel: audit: type=1105 audit(1769192993.286:852): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.301617 kernel: audit: type=1103 audit(1769192993.289:853): pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.289000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.402516 sshd[5378]: Connection closed by 10.0.0.1 port 52164 Jan 23 
18:29:53.402933 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:53.404000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.408588 systemd[1]: sshd@17-10.0.0.29:22-10.0.0.1:52164.service: Deactivated successfully. Jan 23 18:29:53.411655 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:29:53.414463 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:29:53.416133 systemd-logind[1586]: Removed session 19. Jan 23 18:29:53.404000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.426575 kernel: audit: type=1106 audit(1769192993.404:854): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.426666 kernel: audit: type=1104 audit(1769192993.404:855): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:53.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.29:22-10.0.0.1:52164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:54.414130 kubelet[2842]: E0123 18:29:54.413640 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:29:56.413162 kubelet[2842]: E0123 18:29:56.412877 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:29:56.413162 kubelet[2842]: E0123 18:29:56.412877 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:29:58.413325 kubelet[2842]: E0123 18:29:58.413211 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:29:58.414213 kubelet[2842]: E0123 18:29:58.413461 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:29:58.425915 systemd[1]: Started sshd@18-10.0.0.29:22-10.0.0.1:52172.service - OpenSSH per-connection server daemon (10.0.0.1:52172). Jan 23 18:29:58.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.29:22-10.0.0.1:52172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:29:58.429470 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:29:58.429536 kernel: audit: type=1130 audit(1769192998.425:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.29:22-10.0.0.1:52172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:29:58.512000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.513786 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 52172 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:29:58.516818 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:29:58.525626 systemd-logind[1586]: New session 20 of user core. Jan 23 18:29:58.513000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.540032 kernel: audit: type=1101 audit(1769192998.512:858): pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.540161 kernel: audit: type=1103 audit(1769192998.513:859): pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.540195 kernel: audit: type=1006 audit(1769192998.513:860): pid=5393 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 23 18:29:58.547979 kernel: audit: type=1300 audit(1769192998.513:860): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf896bfb0 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:58.513000 audit[5393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf896bfb0 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:29:58.513000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:58.566269 kernel: audit: type=1327 audit(1769192998.513:860): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:29:58.572867 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 18:29:58.576000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.579000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.608491 kernel: audit: type=1105 audit(1769192998.576:861): pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.608614 kernel: audit: type=1103 audit(1769192998.579:862): pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.684322 sshd[5397]: Connection closed by 10.0.0.1 port 52172 Jan 23 18:29:58.684822 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Jan 23 18:29:58.685000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.692727 systemd[1]: sshd@18-10.0.0.29:22-10.0.0.1:52172.service: Deactivated successfully. Jan 23 18:29:58.697530 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:29:58.700118 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:29:58.702715 systemd-logind[1586]: Removed session 20. Jan 23 18:29:58.705515 kernel: audit: type=1106 audit(1769192998.685:863): pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.705664 kernel: audit: type=1104 audit(1769192998.685:864): pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.685000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:29:58.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.29:22-10.0.0.1:52172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:30:01.414803 kubelet[2842]: E0123 18:30:01.414650 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033" Jan 23 18:30:02.412161 kubelet[2842]: E0123 18:30:02.412060 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:30:03.414112 kubelet[2842]: E0123 18:30:03.414048 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff" Jan 23 18:30:03.708222 systemd[1]: Started sshd@19-10.0.0.29:22-10.0.0.1:56922.service - OpenSSH per-connection server daemon (10.0.0.1:56922). Jan 23 18:30:03.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.29:22-10.0.0.1:56922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:03.710970 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:30:03.711117 kernel: audit: type=1130 audit(1769193003.706:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.29:22-10.0.0.1:56922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:30:03.780000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.782056 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 56922 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:03.784759 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:03.793285 systemd-logind[1586]: New session 21 of user core. Jan 23 18:30:03.781000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.796543 kernel: audit: type=1101 audit(1769193003.780:867): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.796603 kernel: audit: type=1103 audit(1769193003.781:868): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.812654 kernel: audit: type=1006 audit(1769193003.781:869): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 23 18:30:03.812752 kernel: audit: type=1300 audit(1769193003.781:869): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefb070b60 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:03.781000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefb070b60 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:03.781000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:03.828256 kernel: audit: type=1327 audit(1769193003.781:869): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:03.829748 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 18:30:03.832000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.832000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.860527 kernel: audit: type=1105 audit(1769193003.832:870): pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.860632 kernel: audit: type=1103 audit(1769193003.832:871): pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.937611 sshd[5416]: Connection closed by 10.0.0.1 port 56922 Jan 23 18:30:03.938162 sshd-session[5412]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:03.938000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.939000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.961353 kernel: audit: type=1106 audit(1769193003.938:872): pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.961497 kernel: audit: type=1104 audit(1769193003.939:873): pid=5412 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:03.972271 systemd[1]: sshd@19-10.0.0.29:22-10.0.0.1:56922.service: Deactivated successfully. Jan 23 18:30:03.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.29:22-10.0.0.1:56922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:03.975665 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:30:03.979662 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:30:03.981520 systemd[1]: Started sshd@20-10.0.0.29:22-10.0.0.1:56928.service - OpenSSH per-connection server daemon (10.0.0.1:56928). 
Jan 23 18:30:03.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.29:22-10.0.0.1:56928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:03.983620 systemd-logind[1586]: Removed session 21. Jan 23 18:30:04.046000 audit[5430]: USER_ACCT pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.048838 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 56928 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:04.048000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.048000 audit[5430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5ef10d0 a2=3 a3=0 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:04.048000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:04.051911 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:04.059746 systemd-logind[1586]: New session 22 of user core. Jan 23 18:30:04.077591 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 18:30:04.079000 audit[5430]: USER_START pid=5430 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.082000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.398031 sshd[5434]: Connection closed by 10.0.0.1 port 56928 Jan 23 18:30:04.399617 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:04.400000 audit[5430]: USER_END pid=5430 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.400000 audit[5430]: CRED_DISP pid=5430 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.410941 systemd[1]: sshd@20-10.0.0.29:22-10.0.0.1:56928.service: Deactivated successfully. Jan 23 18:30:04.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.29:22-10.0.0.1:56928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:30:04.413189 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 18:30:04.414539 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:30:04.418338 systemd[1]: Started sshd@21-10.0.0.29:22-10.0.0.1:56934.service - OpenSSH per-connection server daemon (10.0.0.1:56934). Jan 23 18:30:04.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.29:22-10.0.0.1:56934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:04.419168 systemd-logind[1586]: Removed session 22. Jan 23 18:30:04.512000 audit[5445]: USER_ACCT pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.514319 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 56934 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:04.514000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.514000 audit[5445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccb01a3f0 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:04.514000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:04.517817 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:04.526513 systemd-logind[1586]: New session 23 of user core. Jan 23 18:30:04.540650 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 18:30:04.544000 audit[5445]: USER_START pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:04.547000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.277000 audit[5460]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:30:05.277000 audit[5460]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffed8c913b0 a2=0 a3=7ffed8c9139c items=0 ppid=3005 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:30:05.288220 sshd[5449]: Connection closed by 10.0.0.1 port 56934 Jan 23 18:30:05.288778 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:05.290000 audit[5445]: USER_END pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.290000 audit[5445]: CRED_DISP pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.291000 audit[5460]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:30:05.291000 audit[5460]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffed8c913b0 a2=0 a3=0 items=0 ppid=3005 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:30:05.300844 systemd[1]: sshd@21-10.0.0.29:22-10.0.0.1:56934.service: Deactivated successfully. Jan 23 18:30:05.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.29:22-10.0.0.1:56934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:05.305137 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:30:05.310147 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:30:05.318944 systemd-logind[1586]: Removed session 23. 
Jan 23 18:30:05.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.29:22-10.0.0.1:56944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:05.324947 systemd[1]: Started sshd@22-10.0.0.29:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944). Jan 23 18:30:05.331000 audit[5466]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:30:05.331000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdf029c410 a2=0 a3=7ffdf029c3fc items=0 ppid=3005 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:30:05.341000 audit[5466]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:30:05.341000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf029c410 a2=0 a3=0 items=0 ppid=3005 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:30:05.395000 audit[5467]: USER_ACCT pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.397779 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:05.397000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.397000 audit[5467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd5aa4600 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.397000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:05.400792 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:05.409044 systemd-logind[1586]: New session 24 of user core. Jan 23 18:30:05.425721 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 18:30:05.428000 audit[5467]: USER_START pid=5467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.432000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.685790 sshd[5471]: Connection closed by 10.0.0.1 port 56944 Jan 23 18:30:05.686858 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:05.690000 audit[5467]: USER_END pid=5467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.690000 audit[5467]: CRED_DISP pid=5467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.697461 systemd[1]: sshd@22-10.0.0.29:22-10.0.0.1:56944.service: Deactivated successfully. Jan 23 18:30:05.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.29:22-10.0.0.1:56944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:05.701233 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:30:05.704272 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:30:05.709107 systemd[1]: Started sshd@23-10.0.0.29:22-10.0.0.1:56960.service - OpenSSH per-connection server daemon (10.0.0.1:56960). Jan 23 18:30:05.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.29:22-10.0.0.1:56960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:05.714211 systemd-logind[1586]: Removed session 24. 
Jan 23 18:30:05.799000 audit[5483]: USER_ACCT pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.801743 sshd[5483]: Accepted publickey for core from 10.0.0.1 port 56960 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:05.802000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.802000 audit[5483]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffced721780 a2=3 a3=0 items=0 ppid=1 pid=5483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:05.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:05.806075 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:05.816635 systemd-logind[1586]: New session 25 of user core. Jan 23 18:30:05.829992 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 18:30:05.833000 audit[5483]: USER_START pid=5483 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.837000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.981791 sshd[5487]: Connection closed by 10.0.0.1 port 56960 Jan 23 18:30:05.982226 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:05.983000 audit[5483]: USER_END pid=5483 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.983000 audit[5483]: CRED_DISP pid=5483 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:05.991676 systemd[1]: sshd@23-10.0.0.29:22-10.0.0.1:56960.service: Deactivated successfully. Jan 23 18:30:05.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.29:22-10.0.0.1:56960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:05.996307 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 18:30:05.998504 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:30:06.001173 systemd-logind[1586]: Removed session 25. 
Jan 23 18:30:08.413125 kubelet[2842]: E0123 18:30:08.412956 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc" Jan 23 18:30:09.413429 kubelet[2842]: E0123 18:30:09.413274 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5" Jan 23 18:30:09.414170 kubelet[2842]: E0123 18:30:09.413969 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679" Jan 23 18:30:10.416913 kubelet[2842]: E0123 18:30:10.416869 2842 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:30:10.419213 kubelet[2842]: E0123 18:30:10.417828 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6cd769bc-pxzdx" podUID="7bfb42fc-77fc-4491-a374-12534b8ba3b1" Jan 23 18:30:10.419357 containerd[1632]: time="2026-01-23T18:30:10.417979985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:30:10.518262 containerd[1632]: time="2026-01-23T18:30:10.518024118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:30:10.520152 containerd[1632]: time="2026-01-23T18:30:10.520007824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:30:10.520152 containerd[1632]: time="2026-01-23T18:30:10.520133589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:30:10.520502 kubelet[2842]: E0123 18:30:10.520353 2842 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:30:10.520502 kubelet[2842]: E0123 18:30:10.520475 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:30:10.520653 kubelet[2842]: E0123 18:30:10.520606 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbrtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f89f6994b-gxllw_calico-system(720b9cd4-1750-46fd-95a5-f9417f9523f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:30:10.522635 kubelet[2842]: E0123 18:30:10.522518 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f89f6994b-gxllw" podUID="720b9cd4-1750-46fd-95a5-f9417f9523f5" Jan 23 18:30:11.001873 systemd[1]: Started sshd@24-10.0.0.29:22-10.0.0.1:56972.service - OpenSSH per-connection server daemon (10.0.0.1:56972). Jan 23 18:30:11.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.29:22-10.0.0.1:56972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:11.004842 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 18:30:11.004922 kernel: audit: type=1130 audit(1769193011.000:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.29:22-10.0.0.1:56972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:11.092000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.094045 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 56972 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:30:11.102558 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:30:11.097000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.112582 systemd-logind[1586]: New session 26 of user core. 
Jan 23 18:30:11.120902 kernel: audit: type=1101 audit(1769193011.092:916): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.120981 kernel: audit: type=1103 audit(1769193011.097:917): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.121013 kernel: audit: type=1006 audit(1769193011.097:918): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 18:30:11.128771 kernel: audit: type=1300 audit(1769193011.097:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8544aa10 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:11.097000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8544aa10 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:30:11.097000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:11.160242 kernel: audit: type=1327 audit(1769193011.097:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:30:11.162173 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 18:30:11.166000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.169000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.194521 kernel: audit: type=1105 audit(1769193011.166:919): pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.194862 kernel: audit: type=1103 audit(1769193011.169:920): pid=5504 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.315620 sshd[5504]: Connection closed by 10.0.0.1 port 56972 Jan 23 18:30:11.315753 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Jan 23 18:30:11.316000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.334512 kernel: audit: type=1106 audit(1769193011.316:921): pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.316000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.339898 systemd[1]: sshd@24-10.0.0.29:22-10.0.0.1:56972.service: Deactivated successfully. Jan 23 18:30:11.340485 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:30:11.347587 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:30:11.355035 kernel: audit: type=1104 audit(1769193011.316:922): pid=5500 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:30:11.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.29:22-10.0.0.1:56972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:30:11.373538 systemd-logind[1586]: Removed session 26. 
Jan 23 18:30:13.414718 kubelet[2842]: E0123 18:30:13.414663 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cc6d476d-zc49f" podUID="b1a5247f-3dd0-4a60-b451-df40ad40b033"
Jan 23 18:30:13.449000 audit[5523]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 18:30:13.449000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5c86f0d0 a2=0 a3=7fff5c86f0bc items=0 ppid=3005 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:13.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 18:30:13.460000 audit[5523]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 18:30:13.460000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff5c86f0d0 a2=0 a3=7fff5c86f0bc items=0 ppid=3005 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:13.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 18:30:16.329029 systemd[1]: Started sshd@25-10.0.0.29:22-10.0.0.1:60328.service - OpenSSH per-connection server daemon (10.0.0.1:60328).
Jan 23 18:30:16.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.29:22-10.0.0.1:60328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:16.331749 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 23 18:30:16.331798 kernel: audit: type=1130 audit(1769193016.327:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.29:22-10.0.0.1:60328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:16.401000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.403662 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 60328 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I
Jan 23 18:30:16.406574 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:30:16.412920 systemd-logind[1586]: New session 27 of user core.
Jan 23 18:30:16.403000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.417495 kubelet[2842]: E0123 18:30:16.417358 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbkc5" podUID="420164f1-10e4-4309-843a-9bf4c7513aff"
Jan 23 18:30:16.426283 kernel: audit: type=1101 audit(1769193016.401:927): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.426326 kernel: audit: type=1103 audit(1769193016.403:928): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.426370 kernel: audit: type=1006 audit(1769193016.403:929): pid=5527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 23 18:30:16.403000 audit[5527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3852c580 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:16.444183 kernel: audit: type=1300 audit(1769193016.403:929): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3852c580 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:16.444254 kernel: audit: type=1327 audit(1769193016.403:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:30:16.403000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:30:16.452768 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 18:30:16.455000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.459000 audit[5531]: CRED_ACQ pid=5531 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.481567 kernel: audit: type=1105 audit(1769193016.455:930): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.481633 kernel: audit: type=1103 audit(1769193016.459:931): pid=5531 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.543247 sshd[5531]: Connection closed by 10.0.0.1 port 60328
Jan 23 18:30:16.543619 sshd-session[5527]: pam_unix(sshd:session): session closed for user core
Jan 23 18:30:16.544000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.549253 systemd[1]: sshd@25-10.0.0.29:22-10.0.0.1:60328.service: Deactivated successfully.
Jan 23 18:30:16.552699 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 18:30:16.554243 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit.
Jan 23 18:30:16.556447 systemd-logind[1586]: Removed session 27.
Jan 23 18:30:16.545000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.575573 kernel: audit: type=1106 audit(1769193016.544:932): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.575634 kernel: audit: type=1104 audit(1769193016.545:933): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:16.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.29:22-10.0.0.1:60328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:19.413144 kubelet[2842]: E0123 18:30:19.413098 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-qrngm" podUID="fd8c84c1-3db5-46bc-b232-d92330035bbc"
Jan 23 18:30:21.413146 kubelet[2842]: E0123 18:30:21.413003 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qrzq4" podUID="f4756753-32cd-49e4-a9ac-3b64d97f5679"
Jan 23 18:30:21.568866 systemd[1]: Started sshd@26-10.0.0.29:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330).
Jan 23 18:30:21.581641 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 18:30:21.581793 kernel: audit: type=1130 audit(1769193021.567:935): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.29:22-10.0.0.1:60330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:21.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.29:22-10.0.0.1:60330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:21.686000 audit[5577]: USER_ACCT pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.688943 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I
Jan 23 18:30:21.691687 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:30:21.688000 audit[5577]: CRED_ACQ pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.698250 systemd-logind[1586]: New session 28 of user core.
Jan 23 18:30:21.707568 kernel: audit: type=1101 audit(1769193021.686:936): pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.707675 kernel: audit: type=1103 audit(1769193021.688:937): pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.707717 kernel: audit: type=1006 audit(1769193021.688:938): pid=5577 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 23 18:30:21.713492 kernel: audit: type=1300 audit(1769193021.688:938): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe253bc9f0 a2=3 a3=0 items=0 ppid=1 pid=5577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:21.688000 audit[5577]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe253bc9f0 a2=3 a3=0 items=0 ppid=1 pid=5577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:30:21.714737 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 18:30:21.688000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:30:21.729584 kernel: audit: type=1327 audit(1769193021.688:938): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:30:21.729642 kernel: audit: type=1105 audit(1769193021.717:939): pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.717000 audit[5577]: USER_START pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.744475 kernel: audit: type=1103 audit(1769193021.720:940): pid=5581 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.720000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.879223 sshd[5581]: Connection closed by 10.0.0.1 port 60330
Jan 23 18:30:21.879674 sshd-session[5577]: pam_unix(sshd:session): session closed for user core
Jan 23 18:30:21.881000 audit[5577]: USER_END pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.885998 systemd[1]: sshd@26-10.0.0.29:22-10.0.0.1:60330.service: Deactivated successfully.
Jan 23 18:30:21.888763 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 18:30:21.890302 systemd-logind[1586]: Session 28 logged out. Waiting for processes to exit.
Jan 23 18:30:21.892300 systemd-logind[1586]: Removed session 28.
Jan 23 18:30:21.881000 audit[5577]: CRED_DISP pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.895504 kernel: audit: type=1106 audit(1769193021.881:941): pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.895559 kernel: audit: type=1104 audit(1769193021.881:942): pid=5577 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:30:21.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.29:22-10.0.0.1:60330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:30:23.413636 containerd[1632]: time="2026-01-23T18:30:23.413562795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:30:23.478019 containerd[1632]: time="2026-01-23T18:30:23.477911025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:30:23.480707 containerd[1632]: time="2026-01-23T18:30:23.480535848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:30:23.480927 containerd[1632]: time="2026-01-23T18:30:23.480636135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 23 18:30:23.481121 kubelet[2842]: E0123 18:30:23.480964 2842 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:30:23.481121 kubelet[2842]: E0123 18:30:23.481020 2842 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:30:23.481901 kubelet[2842]: E0123 18:30:23.481261 2842 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qldmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6469486c9-vgp5c_calico-apiserver(45c79f90-5bfc-4e7b-ac61-b9e42301e7a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:30:23.482816 kubelet[2842]: E0123 18:30:23.482743 2842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6469486c9-vgp5c" podUID="45c79f90-5bfc-4e7b-ac61-b9e42301e7a5"