Jan 24 00:53:44.440696 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026
Jan 24 00:53:44.440735 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877
Jan 24 00:53:44.440748 kernel: BIOS-provided physical RAM map:
Jan 24 00:53:44.440761 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:53:44.440771 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 24 00:53:44.440781 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 24 00:53:44.440794 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 24 00:53:44.440802 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 24 00:53:44.440855 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 24 00:53:44.440863 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 24 00:53:44.440870 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 24 00:53:44.440881 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 24 00:53:44.440887 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 24 00:53:44.440894 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 24 00:53:44.440902 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 24 00:53:44.440961 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 24 00:53:44.441019 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 24 00:53:44.441028 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 24 00:53:44.441035 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 24 00:53:44.441042 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 24 00:53:44.441048 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 24 00:53:44.441055 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 24 00:53:44.441062 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 24 00:53:44.441069 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:53:44.441075 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 24 00:53:44.441082 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:53:44.441092 kernel: NX (Execute Disable) protection: active
Jan 24 00:53:44.441099 kernel: APIC: Static calls initialized
Jan 24 00:53:44.441106 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 24 00:53:44.441174 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 24 00:53:44.441181 kernel: extended physical RAM map:
Jan 24 00:53:44.441188 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:53:44.441195 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 24 00:53:44.441202 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 24 00:53:44.441208 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 24 00:53:44.441215 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 24 00:53:44.441222 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 24 00:53:44.441233 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 24 00:53:44.441240 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 24 00:53:44.441247 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 24 00:53:44.441258 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 24 00:53:44.441268 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 24 00:53:44.441275 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 24 00:53:44.441283 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 24 00:53:44.441290 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 24 00:53:44.441297 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 24 00:53:44.441304 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 24 00:53:44.441311 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 24 00:53:44.441318 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 24 00:53:44.441325 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 24 00:53:44.441335 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 24 00:53:44.441342 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 24 00:53:44.441349 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 24 00:53:44.441356 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 24 00:53:44.441364 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 24 00:53:44.441371 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:53:44.441378 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 24 00:53:44.441385 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:53:44.441420 kernel: efi: EFI v2.7 by EDK II
Jan 24 00:53:44.441428 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 24 00:53:44.441460 kernel: random: crng init done
Jan 24 00:53:44.441471 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 24 00:53:44.441503 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 24 00:53:44.441510 kernel: secureboot: Secure boot disabled
Jan 24 00:53:44.441517 kernel: SMBIOS 2.8 present.
Jan 24 00:53:44.441524 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 24 00:53:44.441532 kernel: DMI: Memory slots populated: 1/1
Jan 24 00:53:44.441539 kernel: Hypervisor detected: KVM
Jan 24 00:53:44.441546 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 24 00:53:44.441553 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:53:44.441560 kernel: kvm-clock: using sched offset of 23817934098 cycles
Jan 24 00:53:44.441568 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:53:44.441579 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:53:44.441589 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:53:44.441602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:53:44.441616 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 24 00:53:44.441627 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:53:44.441637 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:53:44.441648 kernel: Using GB pages for direct mapping
Jan 24 00:53:44.441662 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:53:44.441673 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 24 00:53:44.441686 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 24 00:53:44.441700 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441710 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441721 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 24 00:53:44.441731 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441746 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441757 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441771 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:44.441784 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 24 00:53:44.441837 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 24 00:53:44.441851 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 24 00:53:44.441864 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 24 00:53:44.441880 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 24 00:53:44.441891 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 24 00:53:44.441901 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 24 00:53:44.442272 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 24 00:53:44.442287 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 24 00:53:44.442298 kernel: No NUMA configuration found
Jan 24 00:53:44.442310 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 24 00:53:44.442321 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 24 00:53:44.442369 kernel: Zone ranges:
Jan 24 00:53:44.442381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:53:44.442393 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 24 00:53:44.442404 kernel: Normal empty
Jan 24 00:53:44.442415 kernel: Device empty
Jan 24 00:53:44.442427 kernel: Movable zone start for each node
Jan 24 00:53:44.442438 kernel: Early memory node ranges
Jan 24 00:53:44.442453 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:53:44.442497 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 24 00:53:44.442509 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 24 00:53:44.442521 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 24 00:53:44.442532 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 24 00:53:44.442544 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 24 00:53:44.442555 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 24 00:53:44.442567 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 24 00:53:44.442616 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 24 00:53:44.442628 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:53:44.442651 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:53:44.442665 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 24 00:53:44.442679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:53:44.442690 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 24 00:53:44.442701 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 24 00:53:44.442712 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 24 00:53:44.442723 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 24 00:53:44.442740 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 24 00:53:44.442754 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:53:44.442766 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:53:44.443095 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:53:44.443178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:53:44.443192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:53:44.443205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:53:44.443220 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:53:44.443231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:53:44.443243 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:53:44.443253 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:53:44.443269 kernel: TSC deadline timer available
Jan 24 00:53:44.443280 kernel: CPU topo: Max. logical packages: 1
Jan 24 00:53:44.443294 kernel: CPU topo: Max. logical dies: 1
Jan 24 00:53:44.443307 kernel: CPU topo: Max. dies per package: 1
Jan 24 00:53:44.443320 kernel: CPU topo: Max. threads per core: 1
Jan 24 00:53:44.443724 kernel: CPU topo: Num. cores per package: 4
Jan 24 00:53:44.443747 kernel: CPU topo: Num. threads per package: 4
Jan 24 00:53:44.443768 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 24 00:53:44.443779 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:53:44.443790 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:53:44.443803 kernel: kvm-guest: setup PV sched yield
Jan 24 00:53:44.443817 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 24 00:53:44.443829 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:53:44.443840 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:53:44.443851 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:53:44.443866 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 24 00:53:44.443881 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 24 00:53:44.443894 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:53:44.443908 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:53:44.443978 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:53:44.444037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877
Jan 24 00:53:44.444059 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:53:44.444073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:53:44.444086 kernel: Fallback order for Node 0: 0
Jan 24 00:53:44.444095 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 24 00:53:44.444104 kernel: Policy zone: DMA32
Jan 24 00:53:44.444182 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:53:44.444191 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:53:44.444202 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 24 00:53:44.444210 kernel: ftrace: allocated 157 pages with 5 groups
Jan 24 00:53:44.444218 kernel: Dynamic Preempt: voluntary
Jan 24 00:53:44.444266 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:53:44.444278 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:53:44.444387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:53:44.444396 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:53:44.444500 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:53:44.444509 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:53:44.444516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:53:44.444524 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:53:44.445105 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:44.445261 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:44.445270 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:44.445279 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:53:44.445292 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:53:44.445300 kernel: Console: colour dummy device 80x25
Jan 24 00:53:44.445307 kernel: printk: legacy console [ttyS0] enabled
Jan 24 00:53:44.445315 kernel: ACPI: Core revision 20240827
Jan 24 00:53:44.445323 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:53:44.445331 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:53:44.445339 kernel: x2apic enabled
Jan 24 00:53:44.445350 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:53:44.445358 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:53:44.445366 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:53:44.445374 kernel: kvm-guest: setup PV IPIs
Jan 24 00:53:44.445382 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:53:44.445390 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 00:53:44.445398 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:53:44.445409 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:53:44.445417 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:53:44.445425 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:53:44.445433 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:53:44.445441 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:53:44.445449 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:53:44.445457 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:53:44.445467 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:53:44.445476 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:53:44.445515 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:53:44.445524 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:53:44.445532 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:53:44.445540 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:53:44.445548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:53:44.445559 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:53:44.445567 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:53:44.445575 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:53:44.445583 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:53:44.445591 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:53:44.445599 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:53:44.445606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 24 00:53:44.445617 kernel: landlock: Up and running.
Jan 24 00:53:44.445625 kernel: SELinux: Initializing.
Jan 24 00:53:44.445633 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:53:44.445641 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:53:44.445649 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:53:44.445657 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:53:44.445665 kernel: signal: max sigframe size: 1776
Jan 24 00:53:44.445675 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:53:44.445683 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:53:44.445692 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 24 00:53:44.445699 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:53:44.445708 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:53:44.445715 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:53:44.445723 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:53:44.445733 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:53:44.445741 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:53:44.445750 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved)
Jan 24 00:53:44.445758 kernel: devtmpfs: initialized
Jan 24 00:53:44.445765 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:53:44.445773 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 24 00:53:44.445781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 24 00:53:44.445792 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 24 00:53:44.445800 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 24 00:53:44.445808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 24 00:53:44.445816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 24 00:53:44.445824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:53:44.445832 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:53:44.445839 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:53:44.445850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:53:44.445858 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:53:44.445866 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:53:44.445874 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:53:44.445882 kernel: audit: type=2000 audit(1769216013.523:1): state=initialized audit_enabled=0 res=1
Jan 24 00:53:44.445889 kernel: cpuidle: using governor menu
Jan 24 00:53:44.445897 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:53:44.445950 kernel: dca service started, version 1.12.1
Jan 24 00:53:44.445965 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 24 00:53:44.445979 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:53:44.445987 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:53:44.445995 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:53:44.446003 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:53:44.446011 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:53:44.446070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:53:44.446306 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:53:44.446315 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:53:44.446323 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:53:44.446330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:53:44.446338 kernel: ACPI: Interpreter enabled
Jan 24 00:53:44.446346 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:53:44.446358 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:53:44.446366 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:53:44.446374 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:53:44.446382 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:53:44.446390 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:53:44.446895 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:53:44.447356 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:53:44.447643 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:53:44.447662 kernel: PCI host bridge to bus 0000:00
Jan 24 00:53:44.448024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:53:44.448394 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:53:44.448667 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:53:44.449006 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 24 00:53:44.449376 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 24 00:53:44.449646 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 24 00:53:44.449970 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:53:44.450386 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 24 00:53:44.450699 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 24 00:53:44.451065 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 24 00:53:44.451451 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 24 00:53:44.451737 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 24 00:53:44.452090 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:53:44.452491 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 24 00:53:44.452788 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 24 00:53:44.453227 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 24 00:53:44.453521 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 24 00:53:44.453828 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 24 00:53:44.454270 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 24 00:53:44.454561 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 24 00:53:44.454870 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 24 00:53:44.455331 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 24 00:53:44.455625 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 24 00:53:44.455965 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 24 00:53:44.456627 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 24 00:53:44.457829 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 24 00:53:44.458300 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 24 00:53:44.458591 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:53:44.458892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 24 00:53:44.459345 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 24 00:53:44.459640 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 24 00:53:44.460015 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 24 00:53:44.460401 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 24 00:53:44.460421 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:53:44.460436 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:53:44.460448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:53:44.460460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:53:44.460479 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:53:44.460493 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:53:44.460505 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:53:44.460516 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:53:44.460528 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:53:44.460540 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:53:44.460553 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:53:44.460569 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:53:44.460581 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:53:44.460594 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:53:44.460607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:53:44.460620 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:53:44.460633 kernel: iommu: Default domain type: Translated
Jan 24 00:53:44.460646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:53:44.460665 kernel: efivars: Registered efivars operations
Jan 24 00:53:44.460677 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:53:44.460688 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:53:44.460700 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 24 00:53:44.460711 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 24 00:53:44.460725 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 24 00:53:44.460738 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 24 00:53:44.460749 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 24 00:53:44.460765 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 24 00:53:44.460778 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 24 00:53:44.460793 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 24 00:53:44.461241 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:53:44.461532 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:53:44.461815 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:53:44.461842 kernel: vgaarb: loaded
Jan 24 00:53:44.461857 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:53:44.461869 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:53:44.461880 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:53:44.461891 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:53:44.461905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:53:44.461969 kernel: pnp: PnP ACPI init
Jan 24 00:53:44.462401 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 24 00:53:44.462421 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:53:44.462434 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:53:44.462450 kernel: NET: Registered PF_INET protocol family
Jan 24 00:53:44.462461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:53:44.462472 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:53:44.462484 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:53:44.462504 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:53:44.462545 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:53:44.462561 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:53:44.462572 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:53:44.462583 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:53:44.462598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:53:44.462615 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:53:44.462906 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 24 00:53:44.463350 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 24 00:53:44.463629 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:53:44.463901 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:53:44.464336 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:53:44.464596 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 24 00:53:44.464871 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 24 00:53:44.465287 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 24 00:53:44.465313 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:53:44.465325 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 00:53:44.465337 kernel: Initialise system trusted keyrings
Jan 24 00:53:44.465348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:53:44.465361 kernel: Key type asymmetric registered
Jan 24 00:53:44.465381 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:53:44.465392 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 24 00:53:44.465404 kernel: io scheduler mq-deadline registered
Jan 24 00:53:44.465417 kernel: io scheduler kyber registered
Jan 24 00:53:44.465431 kernel: io scheduler bfq registered
Jan 24 00:53:44.465442 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:53:44.465455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:53:44.465474 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:53:44.465487 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:53:44.465503 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:53:44.465515 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:53:44.465533 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:53:44.465546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:53:44.465559 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:53:44.465861 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:53:44.466329 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:53:44.466612 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:53:40 UTC (1769216020)
Jan 24 00:53:44.466891 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 24 00:53:44.466980 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 24 00:53:44.466993 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:53:44.467006 kernel: efifb: probing for efifb
Jan 24 00:53:44.467019 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 24 00:53:44.467031 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 24 00:53:44.467043 kernel: efifb: scrolling: redraw
Jan 24 00:53:44.467055 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:53:44.467072 kernel: Console: switching to colour frame buffer device 160x50
Jan 24 00:53:44.467085 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:53:44.467100 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:53:44.467193 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:53:44.467207 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:53:44.467219 kernel: Segment Routing with IPv6
Jan 24 00:53:44.467231 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:53:44.467249 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:53:44.467261 kernel: Key type dns_resolver registered
Jan 24 00:53:44.467273 kernel: IPI shorthand broadcast: enabled
Jan 24 00:53:44.467287 kernel: sched_clock: Marking stable (7560040173,
1967116258)->(10439702710, -912546279) Jan 24 00:53:44.467299 kernel: registered taskstats version 1 Jan 24 00:53:44.467310 kernel: Loading compiled-in X.509 certificates Jan 24 00:53:44.467322 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d' Jan 24 00:53:44.467338 kernel: Demotion targets for Node 0: null Jan 24 00:53:44.467350 kernel: Key type .fscrypt registered Jan 24 00:53:44.467362 kernel: Key type fscrypt-provisioning registered Jan 24 00:53:44.467374 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:53:44.467386 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:53:44.467398 kernel: ima: No architecture policies found Jan 24 00:53:44.467410 kernel: clk: Disabling unused clocks Jan 24 00:53:44.467426 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 24 00:53:44.467438 kernel: Write protecting the kernel read-only data: 47104k Jan 24 00:53:44.467450 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 00:53:44.467462 kernel: Run /init as init process Jan 24 00:53:44.467473 kernel: with arguments: Jan 24 00:53:44.467486 kernel: /init Jan 24 00:53:44.467498 kernel: with environment: Jan 24 00:53:44.467514 kernel: HOME=/ Jan 24 00:53:44.467529 kernel: TERM=linux Jan 24 00:53:44.467540 kernel: SCSI subsystem initialized Jan 24 00:53:44.467551 kernel: libata version 3.00 loaded. 
Jan 24 00:53:44.467846 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:53:44.467867 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:53:44.468300 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 00:53:44.468594 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 00:53:44.468880 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:53:44.469360 kernel: scsi host0: ahci Jan 24 00:53:44.469684 kernel: scsi host1: ahci Jan 24 00:53:44.470057 kernel: scsi host2: ahci Jan 24 00:53:44.470464 kernel: scsi host3: ahci Jan 24 00:53:44.470783 kernel: scsi host4: ahci Jan 24 00:53:44.471236 kernel: scsi host5: ahci Jan 24 00:53:44.471257 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 24 00:53:44.471269 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 24 00:53:44.471282 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 24 00:53:44.471296 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 24 00:53:44.471317 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 24 00:53:44.471329 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 24 00:53:44.471341 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:53:44.471353 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:53:44.471367 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:53:44.471380 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:53:44.471393 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 00:53:44.471411 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:53:44.471424 kernel: ata3.00: applying bridge limits Jan 24 00:53:44.471437 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:53:44.471450 kernel: ata6: 
SATA link down (SStatus 0 SControl 300) Jan 24 00:53:44.471463 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 00:53:44.471479 kernel: ata3.00: configured for UDMA/100 Jan 24 00:53:44.471807 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:53:44.472304 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:53:44.472592 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 24 00:53:44.472611 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:53:44.472625 kernel: GPT:16515071 != 27000831 Jan 24 00:53:44.472638 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:53:44.472651 kernel: GPT:16515071 != 27000831 Jan 24 00:53:44.472669 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:53:44.472683 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:53:44.473055 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:53:44.473077 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:53:44.473506 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:53:44.473526 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:53:44.473548 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:53:44.473562 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 00:53:44.473574 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 00:53:44.473585 kernel: raid6: avx2x4 gen() 18512 MB/s Jan 24 00:53:44.473597 kernel: raid6: avx2x2 gen() 21435 MB/s Jan 24 00:53:44.473612 kernel: raid6: avx2x1 gen() 23132 MB/s Jan 24 00:53:44.473624 kernel: raid6: using algorithm avx2x1 gen() 23132 MB/s Jan 24 00:53:44.473636 kernel: raid6: .... 
xor() 11556 MB/s, rmw enabled Jan 24 00:53:44.473652 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:53:44.473667 kernel: xor: automatically using best checksumming function avx Jan 24 00:53:44.473680 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:53:44.473692 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182) Jan 24 00:53:44.473703 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f Jan 24 00:53:44.473716 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:53:44.473730 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:53:44.473751 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 00:53:44.473764 kernel: loop: module loaded Jan 24 00:53:44.473775 kernel: loop0: detected capacity change from 0 to 100560 Jan 24 00:53:44.473786 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:53:44.473801 systemd[1]: Successfully made /usr/ read-only. Jan 24 00:53:44.473819 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:53:44.473836 systemd[1]: Detected virtualization kvm. Jan 24 00:53:44.473848 systemd[1]: Detected architecture x86-64. Jan 24 00:53:44.473863 systemd[1]: Running in initrd. Jan 24 00:53:44.473875 systemd[1]: No hostname configured, using default hostname. Jan 24 00:53:44.473887 systemd[1]: Hostname set to . Jan 24 00:53:44.473900 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:53:44.473972 systemd[1]: Queued start job for default target initrd.target. 
Jan 24 00:53:44.473986 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:53:44.474000 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:53:44.474013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:53:44.474026 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:53:44.474039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:53:44.474060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:53:44.474073 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:53:44.474085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:53:44.474099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:53:44.474203 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:53:44.474220 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:53:44.474243 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:53:44.474256 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:53:44.474268 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:53:44.474280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:53:44.474293 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:53:44.474308 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:53:44.474322 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:53:44.474339 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 24 00:53:44.474351 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:53:44.474366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:53:44.474381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:53:44.474393 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:53:44.474405 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:53:44.474418 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:53:44.474440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:53:44.474453 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:53:44.474466 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 00:53:44.474478 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:53:44.474490 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:53:44.474505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:53:44.474524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:53:44.474539 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:53:44.474552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:53:44.474613 systemd-journald[321]: Collecting audit messages is enabled. Jan 24 00:53:44.474647 kernel: audit: type=1130 audit(1769216024.442:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:44.474661 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:53:44.474675 kernel: audit: type=1130 audit(1769216024.463:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.474693 systemd-journald[321]: Journal started Jan 24 00:53:44.474719 systemd-journald[321]: Runtime Journal (/run/log/journal/cd960ba7dcdf4b57bf65964da888ef62) is 6M, max 48M, 42M free. Jan 24 00:53:44.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.484228 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:53:44.496783 kernel: audit: type=1130 audit(1769216024.492:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.498032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:53:44.537524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:53:44.569257 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 24 00:53:44.581518 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 24 00:53:44.582550 kernel: Bridge firewalling registered Jan 24 00:53:44.583206 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 00:53:44.583693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:53:44.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.604491 kernel: audit: type=1130 audit(1769216024.589:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.618648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:53:44.655524 kernel: audit: type=1130 audit(1769216024.632:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.655809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:53:44.683018 kernel: audit: type=1130 audit(1769216024.655:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 24 00:53:44.684709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:53:44.720670 kernel: audit: type=1130 audit(1769216024.694:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.725718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:53:44.736267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:53:44.748906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:53:44.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.807326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:53:44.858408 kernel: audit: type=1130 audit(1769216024.807:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.858442 kernel: audit: type=1130 audit(1769216024.836:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:44.811792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:53:44.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.837282 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:53:44.897516 kernel: audit: type=1130 audit(1769216024.873:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:44.880352 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:53:44.900000 audit: BPF prog-id=6 op=LOAD Jan 24 00:53:44.901816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:53:44.988076 dracut-cmdline[357]: dracut-109 Jan 24 00:53:45.001676 dracut-cmdline[357]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:53:45.065747 systemd-resolved[358]: Positive Trust Anchors: Jan 24 00:53:45.065799 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:53:45.065805 systemd-resolved[358]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:53:45.065832 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:53:45.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:45.116671 systemd-resolved[358]: Defaulting to hostname 'linux'. Jan 24 00:53:45.119017 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:53:45.130303 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:53:45.299511 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:53:45.339278 kernel: iscsi: registered transport (tcp) Jan 24 00:53:45.385486 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:53:45.385570 kernel: QLogic iSCSI HBA Driver Jan 24 00:53:45.468757 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:53:45.529232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:53:45.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:45.543991 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 24 00:53:45.670457 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:53:45.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:45.684450 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:53:45.694087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:53:45.776804 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:53:45.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:45.779000 audit: BPF prog-id=7 op=LOAD Jan 24 00:53:45.779000 audit: BPF prog-id=8 op=LOAD Jan 24 00:53:45.781705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:53:45.844440 systemd-udevd[579]: Using default interface naming scheme 'v257'. Jan 24 00:53:45.882016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:53:45.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:45.893356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:53:45.953026 dracut-pre-trigger[629]: rd.md=0: removing MD RAID activation Jan 24 00:53:46.034533 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:53:46.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:46.049433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:53:46.068667 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:53:46.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.077000 audit: BPF prog-id=9 op=LOAD Jan 24 00:53:46.078507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:53:46.213216 systemd-networkd[728]: lo: Link UP Jan 24 00:53:46.213270 systemd-networkd[728]: lo: Gained carrier Jan 24 00:53:46.221317 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:53:46.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.221649 systemd[1]: Reached target network.target - Network. Jan 24 00:53:46.341711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:53:46.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.477215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:53:46.505066 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:53:46.535671 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:53:46.578494 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:53:46.607305 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 24 00:53:46.627593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:53:46.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.627778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:53:46.637268 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:53:46.673523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:53:46.691368 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:53:46.701047 disk-uuid[765]: Primary Header is updated. Jan 24 00:53:46.701047 disk-uuid[765]: Secondary Entries is updated. Jan 24 00:53:46.701047 disk-uuid[765]: Secondary Header is updated. Jan 24 00:53:46.728222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:53:46.779555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:53:46.779779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:53:46.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.813428 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:53:46.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:46.863286 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:53:46.897896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:53:46.904398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:53:46.944698 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:53:46.944740 kernel: AES CTR mode by8 optimization enabled Jan 24 00:53:46.912496 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:53:46.912505 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:53:46.916480 systemd-networkd[728]: eth0: Link UP Jan 24 00:53:46.916883 systemd-networkd[728]: eth0: Gained carrier Jan 24 00:53:46.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.916902 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:53:46.924278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:53:46.939211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:53:46.981393 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:53:46.998266 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:53:47.083386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 00:53:47.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:47.841507 disk-uuid[767]: Warning: The kernel is still using the old partition table.
Jan 24 00:53:47.841507 disk-uuid[767]: The new table will be used at the next reboot or after you
Jan 24 00:53:47.841507 disk-uuid[767]: run partprobe(8) or kpartx(8)
Jan 24 00:53:47.841507 disk-uuid[767]: The operation has completed successfully.
Jan 24 00:53:47.865388 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:53:47.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:47.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:47.865661 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:53:47.885402 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:53:47.976271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Jan 24 00:53:47.976336 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:53:47.976350 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:48.010333 kernel: BTRFS info (device vda6): turning on async discard
Jan 24 00:53:48.010423 kernel: BTRFS info (device vda6): enabling free space tree
Jan 24 00:53:48.036362 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:53:48.044341 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:53:48.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:48.046437 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:53:48.303732 ignition[884]: Ignition 2.24.0
Jan 24 00:53:48.305644 ignition[884]: Stage: fetch-offline
Jan 24 00:53:48.306344 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:48.306379 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:48.306569 ignition[884]: parsed url from cmdline: ""
Jan 24 00:53:48.306576 ignition[884]: no config URL provided
Jan 24 00:53:48.306681 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:53:48.306700 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:53:48.306755 ignition[884]: op(1): [started] loading QEMU firmware config module
Jan 24 00:53:48.306763 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:53:48.342471 ignition[884]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:53:48.651319 systemd-networkd[728]: eth0: Gained IPv6LL
Jan 24 00:53:48.860085 ignition[884]: parsing config with SHA512: 93deb3d8c623c0e32fab1799839c1c308721a745253284970109d5957be39fbab1eec3d8043b9344642b3852da7f050bd924fedf961d1302eb6111eca479522d
Jan 24 00:53:48.871801 unknown[884]: fetched base config from "system"
Jan 24 00:53:48.873105 unknown[884]: fetched user config from "qemu"
Jan 24 00:53:48.885420 ignition[884]: fetch-offline: fetch-offline passed
Jan 24 00:53:48.885507 ignition[884]: Ignition finished successfully
Jan 24 00:53:48.899260 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:53:48.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:48.907541 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:53:48.925093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:53:49.113287 ignition[894]: Ignition 2.24.0
Jan 24 00:53:49.113358 ignition[894]: Stage: kargs
Jan 24 00:53:49.113598 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:49.113614 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:49.141498 ignition[894]: kargs: kargs passed
Jan 24 00:53:49.141610 ignition[894]: Ignition finished successfully
Jan 24 00:53:49.154749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:53:49.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:49.166528 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:53:49.225730 ignition[901]: Ignition 2.24.0
Jan 24 00:53:49.225791 ignition[901]: Stage: disks
Jan 24 00:53:49.226046 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:49.226062 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:49.228102 ignition[901]: disks: disks passed
Jan 24 00:53:49.228326 ignition[901]: Ignition finished successfully
Jan 24 00:53:49.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:49.269855 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:53:49.281557 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:53:49.295506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:53:49.311086 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:53:49.321668 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:53:49.332232 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:53:49.350593 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:53:49.495902 systemd-fsck[909]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 24 00:53:49.507674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:53:49.548264 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jan 24 00:53:49.548297 kernel: audit: type=1130 audit(1769216029.514:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:49.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:49.556000 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:53:50.060527 kernel: EXT4-fs (vda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none.
Jan 24 00:53:50.063326 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:53:50.072804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:53:50.098379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:53:50.119000 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:53:50.141631 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:53:50.141701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:53:50.141743 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:53:50.176753 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:53:50.224549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:53:50.257068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917)
Jan 24 00:53:50.278615 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:53:50.278684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:50.302511 kernel: BTRFS info (device vda6): turning on async discard
Jan 24 00:53:50.302589 kernel: BTRFS info (device vda6): enabling free space tree
Jan 24 00:53:50.311095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:53:50.822730 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:53:50.867918 kernel: audit: type=1130 audit(1769216030.832:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:50.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:50.873369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:53:50.888508 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:53:50.957295 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:53:50.970867 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:53:51.017311 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:53:51.053097 kernel: audit: type=1130 audit(1769216031.029:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:51.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:51.063854 ignition[1014]: INFO : Ignition 2.24.0
Jan 24 00:53:51.063854 ignition[1014]: INFO : Stage: mount
Jan 24 00:53:51.074044 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:51.074044 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:51.074044 ignition[1014]: INFO : mount: mount passed
Jan 24 00:53:51.074044 ignition[1014]: INFO : Ignition finished successfully
Jan 24 00:53:51.098626 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:53:51.136399 kernel: audit: type=1130 audit(1769216031.104:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:51.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:51.121401 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:53:51.187503 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:53:51.239260 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026)
Jan 24 00:53:51.256837 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:53:51.256920 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:51.286231 kernel: BTRFS info (device vda6): turning on async discard
Jan 24 00:53:51.286309 kernel: BTRFS info (device vda6): enabling free space tree
Jan 24 00:53:51.291711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:53:51.398102 ignition[1043]: INFO : Ignition 2.24.0
Jan 24 00:53:51.407686 ignition[1043]: INFO : Stage: files
Jan 24 00:53:51.407686 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:51.407686 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:51.407686 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:53:51.440051 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:53:51.440051 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:53:51.427033 unknown[1043]: wrote ssh authorized keys file for user: core
Jan 24 00:53:51.566771 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:53:51.704244 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:53:51.798317 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 00:53:52.086775 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:53:52.966531 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:53:52.966531 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 24 00:53:52.998357 ignition[1043]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:53:53.123776 ignition[1043]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:53:53.161731 ignition[1043]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:53:53.174867 ignition[1043]: INFO : files: files passed
Jan 24 00:53:53.174867 ignition[1043]: INFO : Ignition finished successfully
Jan 24 00:53:53.336673 kernel: audit: type=1130 audit(1769216033.226:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.190896 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:53:53.231407 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:53:53.275564 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:53:53.360803 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:53:53.391549 kernel: audit: type=1130 audit(1769216033.366:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.360742 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:53:53.407257 kernel: audit: type=1131 audit(1769216033.391:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.407333 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:53.407333 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:53.455031 kernel: audit: type=1130 audit(1769216033.413:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.360894 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:53:53.466494 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:53.392741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:53:53.447630 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:53:53.479041 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:53:53.629406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:53:53.671526 kernel: audit: type=1130 audit(1769216033.638:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.671567 kernel: audit: type=1131 audit(1769216033.638:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.629693 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:53:53.639651 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:53:53.677741 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:53:53.685531 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:53:53.707522 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:53:53.782077 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:53:53.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.798662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:53:53.868932 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:53:53.871347 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:53:53.884234 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:53:53.908764 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:53:53.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:53.921456 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:53:53.921742 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:53:53.940815 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:53:53.962793 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:53:53.971423 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:53:53.986208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:53:54.015840 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:53:54.023595 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 24 00:53:54.041498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:53:54.062354 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:53:54.085929 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:53:54.093744 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:53:54.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.094087 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:53:54.107538 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:53:54.107738 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:53:54.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.128650 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:53:54.136102 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:53:54.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.146062 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:53:54.146603 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:53:54.151630 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:53:54.151847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:53:54.180457 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:53:54.182047 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:53:54.194384 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:53:54.209757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:53:54.214108 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:53:54.223084 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:53:54.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.230515 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:53:54.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.230793 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:53:54.230950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:53:54.247483 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:53:54.247631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:53:54.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.263039 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 24 00:53:54.263360 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 24 00:53:54.318739 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:53:54.319021 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:53:54.336816 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:53:54.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.337010 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:53:54.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.355446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:53:54.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.362859 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:53:54.500374 ignition[1100]: INFO : Ignition 2.24.0
Jan 24 00:53:54.500374 ignition[1100]: INFO : Stage: umount
Jan 24 00:53:54.500374 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:54.500374 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:54.500374 ignition[1100]: INFO : umount: umount passed
Jan 24 00:53:54.500374 ignition[1100]: INFO : Ignition finished successfully
Jan 24 00:53:54.363107 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:53:54.390692 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:53:54.406417 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:53:54.616926 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 24 00:53:54.617006 kernel: audit: type=1131 audit(1769216034.586:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.429713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:53:54.661458 kernel: audit: type=1131 audit(1769216034.634:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.443554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:53:54.690058 kernel: audit: type=1131 audit(1769216034.667:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.443731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:53:54.467881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:53:54.738506 kernel: audit: type=1131 audit(1769216034.715:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.468606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:53:54.512882 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:53:54.514051 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:53:54.769097 kernel: audit: type=1131 audit(1769216034.754:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.514375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:53:54.593810 systemd[1]: Stopped target network.target - Network.
Jan 24 00:53:54.628780 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:53:54.628937 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:53:54.855463 kernel: audit: type=1130 audit(1769216034.810:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.855506 kernel: audit: type=1131 audit(1769216034.810:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.634877 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:53:54.635570 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:53:54.906618 kernel: audit: type=1131 audit(1769216034.873:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.668366 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:53:54.668557 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:53:54.960415 kernel: audit: type=1131 audit(1769216034.923:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.716505 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:53:54.716617 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:53:54.754550 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:53:55.048313 kernel: audit: type=1131 audit(1769216035.006:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.774782 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:53:55.049000 audit: BPF prog-id=6 op=UNLOAD
Jan 24 00:53:55.051000 audit: BPF prog-id=9 op=UNLOAD
Jan 24 00:53:54.793557 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:53:54.793736 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:53:55.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.811397 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:53:54.811601 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:53:54.904832 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:53:54.907788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:53:54.925452 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:53:55.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:54.925633 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:53:55.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.052740 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 24 00:53:55.066904 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:53:55.067048 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:53:55.067251 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:53:55.067350 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:53:55.108674 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:53:55.109517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:53:55.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.109634 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:53:55.146495 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:53:55.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.146608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:53:55.153295 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:53:55.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.153386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:53:55.165747 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:53:55.213675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:53:55.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.214080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:53:55.231304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:53:55.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.231402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:53:55.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.274946 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:53:55.275090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:53:55.283781 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:53:55.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.283900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:53:55.305466 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:53:55.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.305585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:53:55.329625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:53:55.329748 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:53:55.366673 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:53:55.380511 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 24 00:53:55.380660 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:53:55.397753 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:53:55.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:55.397864 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:53:55.422198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:53:55.422323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:55.461593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:53:55.462574 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:53:55.532942 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:53:55.535405 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:53:55.547680 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:53:55.603461 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:53:55.665952 systemd[1]: Switching root.
Jan 24 00:53:55.725806 systemd-journald[321]: Journal stopped
Jan 24 00:53:59.299424 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:53:59.299563 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:53:59.299585 kernel: SELinux: policy capability open_perms=1
Jan 24 00:53:59.299615 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:53:59.299638 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:53:59.299654 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:53:59.299672 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:53:59.299688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:53:59.299712 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:53:59.299729 kernel: SELinux: policy capability userspace_initial_context=0
Jan 24 00:53:59.299750 systemd[1]: Successfully loaded SELinux policy in 147.472ms.
Jan 24 00:53:59.299781 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.074ms.
Jan 24 00:53:59.299807 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 24 00:53:59.299826 systemd[1]: Detected virtualization kvm.
Jan 24 00:53:59.299843 systemd[1]: Detected architecture x86-64.
Jan 24 00:53:59.299860 systemd[1]: Detected first boot.
Jan 24 00:53:59.299878 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 24 00:53:59.299898 zram_generator::config[1144]: No configuration found.
Jan 24 00:53:59.299917 kernel: Guest personality initialized and is inactive
Jan 24 00:53:59.299937 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 24 00:53:59.299953 kernel: Initialized host personality
Jan 24 00:53:59.299969 kernel: NET: Registered PF_VSOCK protocol family
Jan 24 00:53:59.300070 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:53:59.300097 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:53:59.300222 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:53:59.300247 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:53:59.300280 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:53:59.300300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:53:59.300320 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:53:59.300346 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:53:59.300369 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:53:59.300387 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:53:59.300405 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:53:59.300423 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:53:59.300441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:53:59.300472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:53:59.300490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:53:59.300507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:53:59.300526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:53:59.300547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:53:59.300566 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:53:59.300588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:53:59.300607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:53:59.300631 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:53:59.300653 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:53:59.300670 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:53:59.300687 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:53:59.300709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:53:59.300727 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:53:59.300745 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 24 00:53:59.300766 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:53:59.300786 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:53:59.300804 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:53:59.300822 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:53:59.300843 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 24 00:53:59.300863 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 24 00:53:59.300881 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 24 00:53:59.300899 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:53:59.300917 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 24 00:53:59.300935 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 24 00:53:59.300953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:53:59.301030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:53:59.301054 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:53:59.301072 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:53:59.301096 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:53:59.301215 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:53:59.301238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:59.301257 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:53:59.301285 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:53:59.301304 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:53:59.301321 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:53:59.301341 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:53:59.301359 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:53:59.301378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:53:59.301397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:53:59.301419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:53:59.301437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:53:59.301455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:53:59.301473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:53:59.301491 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:53:59.301508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:53:59.301527 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:53:59.301549 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:53:59.301566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:53:59.301584 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:53:59.301603 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:53:59.301622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 24 00:53:59.301639 kernel: fuse: init (API version 7.41)
Jan 24 00:53:59.301660 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:53:59.301678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:53:59.301695 kernel: ACPI: bus type drm_connector registered
Jan 24 00:53:59.301712 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:53:59.301730 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:53:59.301751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 24 00:53:59.301769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:53:59.301836 systemd-journald[1230]: Collecting audit messages is enabled.
Jan 24 00:53:59.301867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:59.301886 systemd-journald[1230]: Journal started
Jan 24 00:53:59.301919 systemd-journald[1230]: Runtime Journal (/run/log/journal/cd960ba7dcdf4b57bf65964da888ef62) is 6M, max 48M, 42M free.
Jan 24 00:53:58.417000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 24 00:53:59.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.152000 audit: BPF prog-id=14 op=UNLOAD
Jan 24 00:53:59.152000 audit: BPF prog-id=13 op=UNLOAD
Jan 24 00:53:59.155000 audit: BPF prog-id=15 op=LOAD
Jan 24 00:53:59.155000 audit: BPF prog-id=16 op=LOAD
Jan 24 00:53:59.155000 audit: BPF prog-id=17 op=LOAD
Jan 24 00:53:59.289000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 24 00:53:59.289000 audit[1230]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffb713b790 a2=4000 a3=0 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:53:59.289000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 24 00:53:57.805659 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:53:57.843905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:53:57.845249 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:53:57.846388 systemd[1]: systemd-journald.service: Consumed 1.567s CPU time.
Jan 24 00:53:59.313938 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:53:59.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.322059 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:53:59.328247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:53:59.335406 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:53:59.340714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:53:59.347921 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:53:59.354268 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:53:59.359596 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:53:59.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.366314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:53:59.377080 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:53:59.377558 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:53:59.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.385519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:53:59.385849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:53:59.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.394055 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:53:59.394683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:53:59.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.400406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:53:59.400731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:53:59.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.407367 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:53:59.407705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:53:59.412960 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:53:59.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.413568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:53:59.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.419836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:53:59.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.427284 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:53:59.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.438461 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:53:59.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.447042 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 24 00:53:59.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.471563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:53:59.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:53:59.483692 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:53:59.495526 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 24 00:53:59.505931 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:53:59.528373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:53:59.534735 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:53:59.535045 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:53:59.543349 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 24 00:53:59.549818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:53:59.550231 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:53:59.553360 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:53:59.562719 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:53:59.571562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:53:59.573969 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:53:59.581358 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:53:59.584348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:53:59.588779 systemd-journald[1230]: Time spent on flushing to /var/log/journal/cd960ba7dcdf4b57bf65964da888ef62 is 30.325ms for 1194 entries. Jan 24 00:53:59.588779 systemd-journald[1230]: System Journal (/var/log/journal/cd960ba7dcdf4b57bf65964da888ef62) is 8M, max 163.5M, 155.5M free. Jan 24 00:53:59.648553 systemd-journald[1230]: Received client request to flush runtime journal. Jan 24 00:53:59.602282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:53:59.611569 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:53:59.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.623320 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:53:59.630464 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 24 00:53:59.641882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:53:59.653922 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 24 00:53:59.654227 kernel: audit: type=1130 audit(1769216039.649:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.651520 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:53:59.670266 kernel: loop1: detected capacity change from 0 to 50784 Jan 24 00:53:59.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.686415 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:53:59.700762 kernel: audit: type=1130 audit(1769216039.683:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.714283 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:53:59.722695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:53:59.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.747875 kernel: audit: type=1130 audit(1769216039.728:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:59.777746 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:53:59.782690 kernel: loop2: detected capacity change from 0 to 229808 Jan 24 00:53:59.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.796737 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 00:53:59.790000 audit: BPF prog-id=18 op=LOAD Jan 24 00:53:59.815258 kernel: audit: type=1130 audit(1769216039.786:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.815367 kernel: audit: type=1334 audit(1769216039.790:133): prog-id=18 op=LOAD Jan 24 00:53:59.815410 kernel: audit: type=1334 audit(1769216039.790:134): prog-id=19 op=LOAD Jan 24 00:53:59.790000 audit: BPF prog-id=19 op=LOAD Jan 24 00:53:59.821201 kernel: audit: type=1334 audit(1769216039.790:135): prog-id=20 op=LOAD Jan 24 00:53:59.790000 audit: BPF prog-id=20 op=LOAD Jan 24 00:53:59.834000 audit: BPF prog-id=21 op=LOAD Jan 24 00:53:59.843485 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:53:59.846225 kernel: audit: type=1334 audit(1769216039.834:136): prog-id=21 op=LOAD Jan 24 00:53:59.857334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:53:59.863952 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:53:59.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:59.892259 kernel: audit: type=1130 audit(1769216039.870:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.892370 kernel: loop3: detected capacity change from 0 to 111560 Jan 24 00:53:59.899000 audit: BPF prog-id=22 op=LOAD Jan 24 00:53:59.903238 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:53:59.909304 kernel: audit: type=1334 audit(1769216039.899:138): prog-id=22 op=LOAD Jan 24 00:53:59.900000 audit: BPF prog-id=23 op=LOAD Jan 24 00:53:59.900000 audit: BPF prog-id=24 op=LOAD Jan 24 00:53:59.918000 audit: BPF prog-id=25 op=LOAD Jan 24 00:53:59.919000 audit: BPF prog-id=26 op=LOAD Jan 24 00:53:59.921000 audit: BPF prog-id=27 op=LOAD Jan 24 00:53:59.924333 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:53:59.958411 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 24 00:53:59.958429 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 24 00:53:59.969419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:53:59.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:59.999221 kernel: loop4: detected capacity change from 0 to 50784 Jan 24 00:54:00.003402 systemd-nsresourced[1287]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:54:00.006577 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 24 00:54:00.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.037338 kernel: loop5: detected capacity change from 0 to 229808 Jan 24 00:54:00.039504 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:54:00.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.086282 kernel: loop6: detected capacity change from 0 to 111560 Jan 24 00:54:00.119866 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 24 00:54:00.130311 (sd-merge)[1292]: Merged extensions into '/usr'. Jan 24 00:54:00.142606 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:54:00.142626 systemd[1]: Reloading... Jan 24 00:54:00.181517 systemd-oomd[1282]: No swap; memory pressure usage will be degraded Jan 24 00:54:00.199223 systemd-resolved[1284]: Positive Trust Anchors: Jan 24 00:54:00.199247 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:54:00.199254 systemd-resolved[1284]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:54:00.199293 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:54:00.209623 systemd-resolved[1284]: Defaulting to hostname 'linux'. Jan 24 00:54:00.264312 zram_generator::config[1336]: No configuration found. Jan 24 00:54:00.661448 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:54:00.668736 systemd[1]: Reloading finished in 525 ms. Jan 24 00:54:00.730947 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:54:00.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.743063 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:54:00.755778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:54:00.765582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 24 00:54:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:00.796512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:54:00.854839 systemd[1]: Starting ensure-sysext.service... Jan 24 00:54:00.867695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:54:00.885000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:54:00.887000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:54:00.897000 audit: BPF prog-id=28 op=LOAD Jan 24 00:54:00.897000 audit: BPF prog-id=29 op=LOAD Jan 24 00:54:00.902288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 24 00:54:00.921000 audit: BPF prog-id=30 op=LOAD Jan 24 00:54:00.922000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:54:00.922000 audit: BPF prog-id=31 op=LOAD Jan 24 00:54:00.922000 audit: BPF prog-id=32 op=LOAD Jan 24 00:54:00.922000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:54:00.922000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:54:00.924000 audit: BPF prog-id=33 op=LOAD Jan 24 00:54:00.924000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:54:00.924000 audit: BPF prog-id=34 op=LOAD Jan 24 00:54:00.924000 audit: BPF prog-id=35 op=LOAD Jan 24 00:54:00.924000 audit: BPF prog-id=19 op=UNLOAD Jan 24 00:54:00.924000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:54:00.929000 audit: BPF prog-id=36 op=LOAD Jan 24 00:54:00.929000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:54:00.939000 audit: BPF prog-id=37 op=LOAD Jan 24 00:54:00.939000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:54:00.940000 audit: BPF prog-id=38 op=LOAD Jan 24 00:54:00.953000 audit: BPF prog-id=39 op=LOAD Jan 24 00:54:00.953000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:54:00.953000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:54:00.958000 audit: BPF prog-id=40 op=LOAD Jan 24 00:54:00.958000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:54:00.958000 audit: BPF prog-id=41 op=LOAD Jan 24 00:54:00.958000 audit: BPF prog-id=42 op=LOAD Jan 24 00:54:00.958000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:54:00.958000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:54:00.955758 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:54:00.955859 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:54:00.956494 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:54:00.963338 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Jan 24 00:54:00.963514 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. 
Jan 24 00:54:00.975737 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:54:00.975813 systemd[1]: Reloading... Jan 24 00:54:01.003920 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:54:01.003942 systemd-tmpfiles[1374]: Skipping /boot Jan 24 00:54:01.066627 systemd-udevd[1375]: Using default interface naming scheme 'v257'. Jan 24 00:54:01.088901 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:54:01.088981 systemd-tmpfiles[1374]: Skipping /boot Jan 24 00:54:01.155301 zram_generator::config[1404]: No configuration found. Jan 24 00:54:01.740305 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:54:01.788387 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 24 00:54:01.820252 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:54:01.932859 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:54:01.933890 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:54:01.943218 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:54:02.031102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:54:02.041739 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:54:02.042523 systemd[1]: Reloading finished in 1065 ms. Jan 24 00:54:02.062403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:54:02.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:02.088000 audit: BPF prog-id=43 op=LOAD Jan 24 00:54:02.088000 audit: BPF prog-id=44 op=LOAD Jan 24 00:54:02.088000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:54:02.088000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:54:02.091000 audit: BPF prog-id=45 op=LOAD Jan 24 00:54:02.091000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:54:02.091000 audit: BPF prog-id=46 op=LOAD Jan 24 00:54:02.091000 audit: BPF prog-id=47 op=LOAD Jan 24 00:54:02.091000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:54:02.091000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:54:02.110000 audit: BPF prog-id=48 op=LOAD Jan 24 00:54:02.110000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:54:02.110000 audit: BPF prog-id=49 op=LOAD Jan 24 00:54:02.111000 audit: BPF prog-id=50 op=LOAD Jan 24 00:54:02.111000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:54:02.111000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:54:02.114000 audit: BPF prog-id=51 op=LOAD Jan 24 00:54:02.114000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:54:02.117000 audit: BPF prog-id=52 op=LOAD Jan 24 00:54:02.117000 audit: BPF prog-id=53 op=LOAD Jan 24 00:54:02.117000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:54:02.117000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:54:02.120000 audit: BPF prog-id=54 op=LOAD Jan 24 00:54:02.120000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:54:02.125000 audit: BPF prog-id=55 op=LOAD Jan 24 00:54:02.125000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:54:02.126000 audit: BPF prog-id=56 op=LOAD Jan 24 00:54:02.126000 audit: BPF prog-id=57 op=LOAD Jan 24 00:54:02.126000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:54:02.126000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:54:02.142235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:54:02.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:02.294445 kernel: hrtimer: interrupt took 8882383 ns Jan 24 00:54:02.375549 systemd[1]: Finished ensure-sysext.service. Jan 24 00:54:02.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.433279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:54:02.437765 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:54:02.447214 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:54:02.455950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:54:02.463092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:54:02.517535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:54:02.538939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:54:02.572653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:54:02.603797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:54:02.604728 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:54:02.641550 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:54:02.664541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 24 00:54:02.697307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:54:02.718479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:54:02.742000 audit: BPF prog-id=58 op=LOAD Jan 24 00:54:02.748706 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:54:02.766000 audit: BPF prog-id=59 op=LOAD Jan 24 00:54:02.803616 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:54:02.838350 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:54:02.865878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:54:02.866714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:54:02.874273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:54:02.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:02.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.886665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:54:02.887623 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:54:02.887983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:54:02.900486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:54:02.900941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:54:02.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.928387 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:54:02.944432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:54:02.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.964879 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 24 00:54:02.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:02.995096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:54:02.995335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:54:03.020000 audit[1513]: SYSTEM_BOOT pid=1513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 00:54:03.104762 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:54:03.110239 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:54:03.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:03.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:03.254000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:54:03.254000 audit[1535]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdeb7cbc0 a2=420 a3=0 items=0 ppid=1489 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:03.254000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:54:03.255418 augenrules[1535]: No rules Jan 24 00:54:03.271904 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:54:03.289561 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:54:03.332991 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:54:03.334496 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:54:03.402339 systemd-networkd[1507]: lo: Link UP Jan 24 00:54:03.402356 systemd-networkd[1507]: lo: Gained carrier Jan 24 00:54:03.416935 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:54:03.433761 systemd[1]: Reached target network.target - Network. Jan 24 00:54:03.435812 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:54:03.436287 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:54:03.452857 systemd-networkd[1507]: eth0: Link UP Jan 24 00:54:03.460255 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 24 00:54:03.460506 systemd-networkd[1507]: eth0: Gained carrier Jan 24 00:54:03.460551 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:54:03.494680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:54:03.507866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:54:03.573746 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:54:03.599243 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:54:03.599330 systemd-networkd[1507]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:54:03.622374 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:54:04.481022 systemd-resolved[1284]: Clock change detected. Flushing caches. Jan 24 00:54:04.482941 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:54:04.483203 systemd-timesyncd[1510]: Initial clock synchronization to Sat 2026-01-24 00:54:04.480400 UTC. Jan 24 00:54:04.524369 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:54:04.678130 kernel: kvm_amd: TSC scaling supported Jan 24 00:54:04.678357 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:54:04.678382 kernel: kvm_amd: Nested Paging enabled Jan 24 00:54:04.683176 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:54:04.692900 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:54:05.217818 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:54:05.790247 ldconfig[1501]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:54:05.841534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 24 00:54:05.861390 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:54:05.953948 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:54:05.969261 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:54:05.982591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:54:05.996680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:54:06.011529 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:54:06.028103 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:54:06.053375 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:54:06.070107 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:54:06.089222 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:54:06.105015 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:54:06.113996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:54:06.114119 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:54:06.142570 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:54:06.166228 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:54:06.176365 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:54:06.188098 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:54:06.202921 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jan 24 00:54:06.219972 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:54:06.273811 systemd-networkd[1507]: eth0: Gained IPv6LL Jan 24 00:54:06.274408 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:54:06.284526 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 00:54:06.308163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:54:06.326404 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:54:06.348222 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:54:06.357981 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:54:06.368101 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:54:06.374105 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:54:06.374271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:54:06.378593 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:54:06.395887 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:54:06.415960 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:54:06.445018 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:54:06.458945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:54:06.471030 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:54:06.478563 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:54:06.486466 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 24 00:54:06.492039 jq[1560]: false Jan 24 00:54:06.502175 extend-filesystems[1561]: Found /dev/vda6 Jan 24 00:54:06.504176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:06.506478 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 24 00:54:06.507594 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 24 00:54:06.523103 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:54:06.542870 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 24 00:54:06.542870 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:54:06.542870 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 24 00:54:06.542342 oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 24 00:54:06.542377 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:54:06.542449 oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 24 00:54:06.547452 extend-filesystems[1561]: Found /dev/vda9 Jan 24 00:54:06.556312 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:54:06.566039 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 24 00:54:06.569400 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 24 00:54:06.569400 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:54:06.566068 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:54:06.584854 extend-filesystems[1561]: Checking size of /dev/vda9 Jan 24 00:54:06.590657 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 24 00:54:06.622210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:54:06.650844 extend-filesystems[1561]: Resized partition /dev/vda9 Jan 24 00:54:06.649995 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:54:06.681323 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:54:06.713281 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 24 00:54:06.682188 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:54:06.690825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:54:06.692106 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:54:06.699573 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:54:06.756699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:54:06.793964 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:54:06.808515 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:54:06.872686 jq[1593]: true Jan 24 00:54:06.873028 update_engine[1588]: I20260124 00:54:06.867576 1588 main.cc:92] Flatcar Update Engine starting Jan 24 00:54:06.812051 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:54:06.814236 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:54:06.817070 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:54:06.831235 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:54:06.847302 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 24 00:54:06.861266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:54:06.870592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:54:06.871242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:54:06.958929 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 24 00:54:06.998301 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:54:06.998301 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:54:06.998301 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 24 00:54:07.074232 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Jan 24 00:54:07.015145 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:54:07.016484 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:54:07.108299 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:54:07.109335 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:54:07.148504 jq[1608]: true Jan 24 00:54:07.160536 tar[1606]: linux-amd64/LICENSE Jan 24 00:54:07.160536 tar[1606]: linux-amd64/helm Jan 24 00:54:07.179334 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:54:07.265400 dbus-daemon[1558]: [system] SELinux support is enabled Jan 24 00:54:07.265537 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:54:07.265574 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:54:07.266414 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:54:07.283360 update_engine[1588]: I20260124 00:54:07.282853 1588 update_check_scheduler.cc:74] Next update check in 4m41s Jan 24 00:54:07.287444 systemd-logind[1585]: New seat seat0. 
Jan 24 00:54:07.288560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:54:07.289677 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:54:07.304442 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:54:07.304534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:54:07.318697 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:54:07.322364 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:54:07.352448 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:54:07.395449 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:54:07.405531 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:54:07.486097 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:54:07.524172 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 24 00:54:07.768684 containerd[1609]: time="2026-01-24T00:54:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 00:54:07.768684 containerd[1609]: time="2026-01-24T00:54:07.759882124Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 00:54:07.799974 containerd[1609]: time="2026-01-24T00:54:07.799903857Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.458µs" Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.804550142Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.804888162Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.804908330Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.805315680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.806343840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.806509910Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:54:07.806812 containerd[1609]: time="2026-01-24T00:54:07.806528815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 
00:54:07.807392 containerd[1609]: time="2026-01-24T00:54:07.807363354Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.807477 containerd[1609]: time="2026-01-24T00:54:07.807453412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:54:07.807587 containerd[1609]: time="2026-01-24T00:54:07.807559079Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:54:07.807802 containerd[1609]: time="2026-01-24T00:54:07.807782025Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.811463 containerd[1609]: time="2026-01-24T00:54:07.811324609Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.813546 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:54:07.815994 containerd[1609]: time="2026-01-24T00:54:07.814475472Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 00:54:07.817221 containerd[1609]: time="2026-01-24T00:54:07.817117734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.817871 containerd[1609]: time="2026-01-24T00:54:07.817594564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.818003 containerd[1609]: time="2026-01-24T00:54:07.817918469Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:54:07.818003 containerd[1609]: time="2026-01-24T00:54:07.817979563Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 00:54:07.818180 containerd[1609]: time="2026-01-24T00:54:07.818032652Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 00:54:07.818402 containerd[1609]: time="2026-01-24T00:54:07.818312956Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 00:54:07.818600 containerd[1609]: time="2026-01-24T00:54:07.818508310Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:54:07.849046 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:54:07.853293 containerd[1609]: time="2026-01-24T00:54:07.853155785Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 00:54:07.855197 containerd[1609]: time="2026-01-24T00:54:07.854569224Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.855910999Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.855950893Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.855984095Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856007309Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856020363Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856030592Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856043436Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856055349Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856067081Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856077680Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856087158Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856101635Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 00:54:07.856853 containerd[1609]: time="2026-01-24T00:54:07.856354056Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 00:54:07.857314 containerd[1609]: time="2026-01-24T00:54:07.856377470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 00:54:07.857314 containerd[1609]: 
time="2026-01-24T00:54:07.856394502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 00:54:07.857314 containerd[1609]: time="2026-01-24T00:54:07.856405452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 00:54:07.857403 containerd[1609]: time="2026-01-24T00:54:07.856552908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 00:54:07.857469 containerd[1609]: time="2026-01-24T00:54:07.857450082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 00:54:07.857531 containerd[1609]: time="2026-01-24T00:54:07.857517759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 00:54:07.857581 containerd[1609]: time="2026-01-24T00:54:07.857568944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 00:54:07.857835 containerd[1609]: time="2026-01-24T00:54:07.857809624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 00:54:07.858130 containerd[1609]: time="2026-01-24T00:54:07.857966656Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 00:54:07.858511 containerd[1609]: time="2026-01-24T00:54:07.858343239Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 00:54:07.859279 containerd[1609]: time="2026-01-24T00:54:07.859158682Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 00:54:07.859680 containerd[1609]: time="2026-01-24T00:54:07.859656031Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 00:54:07.860102 containerd[1609]: time="2026-01-24T00:54:07.859820097Z" level=info 
msg="Start snapshots syncer" Jan 24 00:54:07.860595 containerd[1609]: time="2026-01-24T00:54:07.860474649Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 00:54:07.862795 containerd[1609]: time="2026-01-24T00:54:07.861803209Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"sta
teDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 00:54:07.862795 containerd[1609]: time="2026-01-24T00:54:07.862163613Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 00:54:07.863059 containerd[1609]: time="2026-01-24T00:54:07.862443635Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 00:54:07.863059 containerd[1609]: time="2026-01-24T00:54:07.863005064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 00:54:07.863059 containerd[1609]: time="2026-01-24T00:54:07.863053113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 00:54:07.863889 containerd[1609]: time="2026-01-24T00:54:07.863326794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 00:54:07.863889 containerd[1609]: time="2026-01-24T00:54:07.863523031Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 00:54:07.863889 containerd[1609]: time="2026-01-24T00:54:07.863567674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 00:54:07.863889 containerd[1609]: time="2026-01-24T00:54:07.863588773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 00:54:07.863889 containerd[1609]: time="2026-01-24T00:54:07.863675716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 00:54:07.864561 containerd[1609]: time="2026-01-24T00:54:07.863700933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 00:54:07.864982 containerd[1609]: time="2026-01-24T00:54:07.864957448Z" level=info msg="loading plugin" 
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.866581681Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.867973396Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868002209Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868019572Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868036483Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868051761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868077790Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868098118Z" level=info msg="runtime interface created" Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868105913Z" level=info msg="created NRI interface" Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868117184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868142010Z" level=info msg="Connect containerd service" Jan 24 
00:54:07.867861 containerd[1609]: time="2026-01-24T00:54:07.868198265Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:54:07.877929 containerd[1609]: time="2026-01-24T00:54:07.876872800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:54:07.971498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:54:08.009898 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:54:08.118195 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:54:08.118952 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:54:08.171479 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:54:08.184204 containerd[1609]: time="2026-01-24T00:54:08.183796929Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:54:08.184204 containerd[1609]: time="2026-01-24T00:54:08.183911823Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 24 00:54:08.184204 containerd[1609]: time="2026-01-24T00:54:08.183955655Z" level=info msg="Start subscribing containerd event" Jan 24 00:54:08.184204 containerd[1609]: time="2026-01-24T00:54:08.183995780Z" level=info msg="Start recovering state" Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184328872Z" level=info msg="Start event monitor" Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184349591Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184359209Z" level=info msg="Start streaming server" Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184370991Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184385047Z" level=info msg="runtime interface starting up..." Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184392671Z" level=info msg="starting plugins..." Jan 24 00:54:08.184479 containerd[1609]: time="2026-01-24T00:54:08.184414692Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 00:54:08.200687 containerd[1609]: time="2026-01-24T00:54:08.189281879Z" level=info msg="containerd successfully booted in 0.443760s" Jan 24 00:54:08.192175 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:54:08.278851 tar[1606]: linux-amd64/README.md Jan 24 00:54:08.299986 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:54:08.346584 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:54:08.363000 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:54:08.374267 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:54:08.395535 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 24 00:54:10.751420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:10.790582 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:54:10.797360 systemd[1]: Startup finished in 9.979s (kernel) + 12.892s (initrd) + 13.977s (userspace) = 36.849s. Jan 24 00:54:10.821972 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:54:13.325521 kubelet[1695]: E0124 00:54:13.324591 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:54:13.352432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:54:13.354175 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:54:13.358462 systemd[1]: kubelet.service: Consumed 2.375s CPU time, 270.5M memory peak. Jan 24 00:54:14.674405 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:54:14.683901 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:40792.service - OpenSSH per-connection server daemon (10.0.0.1:40792). Jan 24 00:54:15.062049 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 40792 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:15.071216 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:15.104408 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:54:15.118923 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:54:15.155880 systemd-logind[1585]: New session 1 of user core. 
Jan 24 00:54:15.244969 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:54:15.274164 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:54:15.399249 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:15.425987 systemd-logind[1585]: New session 2 of user core. Jan 24 00:54:18.163998 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1208833105 wd_nsec: 1208832840 Jan 24 00:54:18.600428 systemd[1715]: Queued start job for default target default.target. Jan 24 00:54:18.626867 systemd[1715]: Created slice app.slice - User Application Slice. Jan 24 00:54:18.626959 systemd[1715]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 00:54:18.626980 systemd[1715]: Reached target paths.target - Paths. Jan 24 00:54:18.627119 systemd[1715]: Reached target timers.target - Timers. Jan 24 00:54:18.659111 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:54:18.664220 systemd[1715]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 00:54:18.768566 systemd[1715]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 00:54:18.821290 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:54:18.823265 systemd[1715]: Reached target sockets.target - Sockets. Jan 24 00:54:18.823397 systemd[1715]: Reached target basic.target - Basic System. Jan 24 00:54:18.823471 systemd[1715]: Reached target default.target - Main User Target. Jan 24 00:54:18.823527 systemd[1715]: Startup finished in 626ms. Jan 24 00:54:18.824971 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:54:18.847874 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 24 00:54:18.914171 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). Jan 24 00:54:19.156955 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:19.160394 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:19.197882 systemd-logind[1585]: New session 3 of user core. Jan 24 00:54:19.214180 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:54:19.365517 sshd[1733]: Connection closed by 10.0.0.1 port 40794 Jan 24 00:54:19.367419 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:19.390266 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:40794.service: Deactivated successfully. Jan 24 00:54:19.398311 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:54:19.407408 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:54:19.413462 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:40810.service - OpenSSH per-connection server daemon (10.0.0.1:40810). Jan 24 00:54:19.417284 systemd-logind[1585]: Removed session 3. Jan 24 00:54:19.621570 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 40810 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:19.623198 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:19.651437 systemd-logind[1585]: New session 4 of user core. Jan 24 00:54:19.678945 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:54:19.716891 sshd[1743]: Connection closed by 10.0.0.1 port 40810 Jan 24 00:54:19.717470 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:19.756352 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:40810.service: Deactivated successfully. 
Jan 24 00:54:19.763512 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:54:19.767409 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:54:19.784910 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:40816.service - OpenSSH per-connection server daemon (10.0.0.1:40816). Jan 24 00:54:19.788003 systemd-logind[1585]: Removed session 4. Jan 24 00:54:19.963458 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 40816 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:19.965931 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:19.982100 systemd-logind[1585]: New session 5 of user core. Jan 24 00:54:20.002854 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:54:20.062958 sshd[1753]: Connection closed by 10.0.0.1 port 40816 Jan 24 00:54:20.066028 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:20.094358 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:40816.service: Deactivated successfully. Jan 24 00:54:20.100258 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:54:20.105325 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:54:20.114476 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). Jan 24 00:54:20.118116 systemd-logind[1585]: Removed session 5. Jan 24 00:54:20.287992 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:20.295959 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:20.318162 systemd-logind[1585]: New session 6 of user core. Jan 24 00:54:20.327688 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 24 00:54:20.425323 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:54:20.427361 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:20.454076 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:20.458953 sshd[1764]: Connection closed by 10.0.0.1 port 40830 Jan 24 00:54:20.460085 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:20.484304 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:40830.service: Deactivated successfully. Jan 24 00:54:20.489346 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:54:20.492190 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:54:20.498358 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:40832.service - OpenSSH per-connection server daemon (10.0.0.1:40832). Jan 24 00:54:20.507072 systemd-logind[1585]: Removed session 6. Jan 24 00:54:20.691073 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 40832 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:20.703338 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:20.751949 systemd-logind[1585]: New session 7 of user core. Jan 24 00:54:20.772297 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 24 00:54:20.891081 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:54:20.891853 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:20.907145 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:20.954861 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 00:54:20.955586 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:21.019140 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:54:21.198000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:54:21.202599 augenrules[1802]: No rules Jan 24 00:54:21.206590 kernel: kauditd_printk_skb: 92 callbacks suppressed Jan 24 00:54:21.206851 kernel: audit: type=1305 audit(1769216061.198:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:54:21.205486 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:54:21.206365 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 24 00:54:21.212054 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:21.198000 audit[1802]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdf614bc0 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.224130 kernel: audit: type=1300 audit(1769216061.198:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdf614bc0 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.225457 sshd[1776]: Connection closed by 10.0.0.1 port 40832 Jan 24 00:54:21.227245 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:21.257764 kernel: audit: type=1327 audit(1769216061.198:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:54:21.198000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:54:21.270772 kernel: audit: type=1130 audit(1769216061.207:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:21.308779 kernel: audit: type=1131 audit(1769216061.207:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.308878 kernel: audit: type=1106 audit(1769216061.210:232): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.210000 audit[1777]: USER_END pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.210000 audit[1777]: CRED_DISP pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.333965 kernel: audit: type=1104 audit(1769216061.210:233): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:21.229000 audit[1772]: USER_END pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.359100 kernel: audit: type=1106 audit(1769216061.229:234): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.229000 audit[1772]: CRED_DISP pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.404053 kernel: audit: type=1104 audit(1769216061.229:235): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.104:22-10.0.0.1:40832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.424409 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:40832.service: Deactivated successfully. Jan 24 00:54:21.458009 kernel: audit: type=1131 audit(1769216061.424:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.104:22-10.0.0.1:40832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:21.467227 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:54:21.474458 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:54:21.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:40834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.489059 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:40834.service - OpenSSH per-connection server daemon (10.0.0.1:40834). Jan 24 00:54:21.490454 systemd-logind[1585]: Removed session 7. Jan 24 00:54:21.644000 audit[1811]: USER_ACCT pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.646816 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 40834 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:54:21.654238 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:21.652000 audit[1811]: CRED_ACQ pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.652000 audit[1811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc21d45010 a2=3 a3=0 items=0 ppid=1 pid=1811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.652000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:54:21.684942 systemd-logind[1585]: New session 8 of 
user core. Jan 24 00:54:21.707196 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:54:21.724000 audit[1811]: USER_START pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.736000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:21.776000 audit[1816]: USER_ACCT pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.778000 audit[1816]: CRED_REFR pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.778000 audit[1816]: USER_START pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:21.779255 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:54:21.780048 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:22.899084 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 24 00:54:22.927213 (dockerd)[1838]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:54:23.605469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:54:23.906063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:25.194114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:25.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:25.217027 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:54:25.572017 dockerd[1838]: time="2026-01-24T00:54:25.570558047Z" level=info msg="Starting up" Jan 24 00:54:25.573974 dockerd[1838]: time="2026-01-24T00:54:25.573866183Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:54:26.213863 dockerd[1838]: time="2026-01-24T00:54:26.211871638Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:54:26.296575 kubelet[1853]: E0124 00:54:26.294500 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:54:26.321210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:54:26.321492 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:54:26.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:54:26.333420 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 00:54:26.333482 kernel: audit: type=1131 audit(1769216066.324:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:54:26.325133 systemd[1]: kubelet.service: Consumed 1.926s CPU time, 110.6M memory peak. Jan 24 00:54:26.453915 dockerd[1838]: time="2026-01-24T00:54:26.452331954Z" level=info msg="Loading containers: start." Jan 24 00:54:26.972989 kernel: Initializing XFRM netlink socket Jan 24 00:54:27.299000 audit[1907]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.299000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffef5030c0 a2=0 a3=0 items=0 ppid=1838 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.345889 kernel: audit: type=1325 audit(1769216067.299:248): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.346010 kernel: audit: type=1300 audit(1769216067.299:248): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffef5030c0 a2=0 a3=0 items=0 ppid=1838 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.346060 kernel: audit: type=1327 audit(1769216067.299:248): 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:54:27.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:54:27.355572 kernel: audit: type=1325 audit(1769216067.311:249): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.311000 audit[1909]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.367777 kernel: audit: type=1300 audit(1769216067.311:249): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdc5b952d0 a2=0 a3=0 items=0 ppid=1838 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.311000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdc5b952d0 a2=0 a3=0 items=0 ppid=1838 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.388533 kernel: audit: type=1327 audit(1769216067.311:249): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:54:27.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:54:27.322000 audit[1911]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.322000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8a3dff60 a2=0 a3=0 items=0 ppid=1838 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.434409 kernel: audit: type=1325 audit(1769216067.322:250): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.434540 kernel: audit: type=1300 audit(1769216067.322:250): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8a3dff60 a2=0 a3=0 items=0 ppid=1838 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.434599 kernel: audit: type=1327 audit(1769216067.322:250): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:54:27.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:54:27.338000 audit[1913]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.338000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfbcb33e0 a2=0 a3=0 items=0 ppid=1838 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:54:27.344000 audit[1915]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.344000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcca77a730 a2=0 a3=0 items=0 ppid=1838 
pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:54:27.354000 audit[1917]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.354000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff7bb23180 a2=0 a3=0 items=0 ppid=1838 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:54:27.366000 audit[1919]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.366000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9283da10 a2=0 a3=0 items=0 ppid=1838 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:54:27.372000 audit[1921]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.372000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcb9da5940 a2=0 a3=0 
items=0 ppid=1838 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:54:27.546000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.546000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe71779a70 a2=0 a3=0 items=0 ppid=1838 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:54:27.558000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.558000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffddc7905d0 a2=0 a3=0 items=0 ppid=1838 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:54:27.565000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.565000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffef83dd230 a2=0 a3=0 items=0 ppid=1838 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:54:27.574000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.574000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffef8083f60 a2=0 a3=0 items=0 ppid=1838 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:54:27.585000 audit[1932]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.585000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcddce0dc0 a2=0 a3=0 items=0 ppid=1838 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:54:27.748000 audit[1962]: NETFILTER_CFG table=nat:15 family=10 
entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.748000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc5f26f940 a2=0 a3=0 items=0 ppid=1838 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:54:27.757000 audit[1964]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.757000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd7c3e6f10 a2=0 a3=0 items=0 ppid=1838 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:54:27.767000 audit[1966]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.767000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e61c010 a2=0 a3=0 items=0 ppid=1838 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:54:27.779000 audit[1968]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain 
pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.779000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeba42220 a2=0 a3=0 items=0 ppid=1838 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:54:27.796000 audit[1970]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.796000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff81b34bd0 a2=0 a3=0 items=0 ppid=1838 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:54:27.808000 audit[1972]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.808000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd01b50ad0 a2=0 a3=0 items=0 ppid=1838 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:54:27.821000 audit[1974]: NETFILTER_CFG table=filter:21 family=10 entries=1 
op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.821000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe132fbeb0 a2=0 a3=0 items=0 ppid=1838 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:54:27.831000 audit[1976]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.831000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe7b6be1d0 a2=0 a3=0 items=0 ppid=1838 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:54:27.845000 audit[1978]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.845000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdc54a4120 a2=0 a3=0 items=0 ppid=1838 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.845000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:54:27.855000 audit[1980]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.855000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd7b5f6250 a2=0 a3=0 items=0 ppid=1838 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:54:27.863000 audit[1982]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.863000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff3621f760 a2=0 a3=0 items=0 ppid=1838 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.863000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:54:27.874000 audit[1984]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.874000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff86c27360 a2=0 a3=0 items=0 ppid=1838 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:54:27.888000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.888000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffddde9da20 a2=0 a3=0 items=0 ppid=1838 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:54:27.914000 audit[1991]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.914000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff95705930 a2=0 a3=0 items=0 ppid=1838 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:54:27.927000 audit[1993]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.927000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff8dd720c0 a2=0 a3=0 items=0 ppid=1838 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:54:27.939000 audit[1995]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:27.939000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd370cc880 a2=0 a3=0 items=0 ppid=1838 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:54:27.950000 audit[1997]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.950000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0b4a0f80 a2=0 a3=0 items=0 ppid=1838 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:54:27.962000 audit[1999]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.962000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffda7ec8890 a2=0 a3=0 items=0 ppid=1838 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:54:27.987000 audit[2001]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.987000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcde007d20 a2=0 a3=0 items=0 ppid=1838 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:54:28.068000 audit[2006]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.068000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd78e73460 a2=0 a3=0 items=0 ppid=1838 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:54:28.079000 audit[2008]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.079000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffe8a29fa0 a2=0 a3=0 items=0 ppid=1838 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:54:28.112000 audit[2016]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.112000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe927fdc00 a2=0 a3=0 items=0 ppid=1838 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 00:54:28.146000 audit[2022]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.146000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd68416f50 a2=0 a3=0 items=0 ppid=1838 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:54:28.157000 audit[2024]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.157000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 
a0=3 a1=7ffd54d414d0 a2=0 a3=0 items=0 ppid=1838 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:54:28.171000 audit[2026]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.171000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbe298c00 a2=0 a3=0 items=0 ppid=1838 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:54:28.182000 audit[2028]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.182000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc0c34ffa0 a2=0 a3=0 items=0 ppid=1838 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.182000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:54:28.190000 audit[2030]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:28.190000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff4a9f0050 a2=0 a3=0 items=0 ppid=1838 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:28.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:54:28.194061 systemd-networkd[1507]: docker0: Link UP Jan 24 00:54:28.211863 dockerd[1838]: time="2026-01-24T00:54:28.210139057Z" level=info msg="Loading containers: done." 
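The audit `PROCTITLE` records in the netfilter setup above encode the full command line as a single hex string with NUL-separated argv entries. A minimal decoder (the hex value below is copied verbatim from one of the `iptables` records in this log):

```python
# Audit PROCTITLE payloads are hex-encoded argv arrays, NUL-separated.
# This hex string is taken verbatim from a record in the log above.
PROCTITLE_HEX = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
                 "002D4100444F434B45522D55534552002D6A0052455455524E")

def decode_proctitle(hex_str: str) -> list[str]:
    """Hex-decode an audit proctitle and split on the NUL argv separator."""
    return bytes.fromhex(hex_str).decode("ascii").split("\x00")

argv = decode_proctitle(PROCTITLE_HEX)
# argv -> ['/usr/bin/iptables', '--wait', '-A', 'DOCKER-USER', '-j', 'RETURN']
```

Decoding the other `PROCTITLE` values the same way shows dockerd creating its usual DOCKER-USER, DOCKER-FORWARD, and DOCKER-ISOLATION chains for both families.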
Jan 24 00:54:28.282009 dockerd[1838]: time="2026-01-24T00:54:28.281619181Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:54:28.282265 dockerd[1838]: time="2026-01-24T00:54:28.282232767Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:54:28.283353 dockerd[1838]: time="2026-01-24T00:54:28.282479478Z" level=info msg="Initializing buildkit" Jan 24 00:54:28.428466 dockerd[1838]: time="2026-01-24T00:54:28.428123534Z" level=info msg="Completed buildkit initialization" Jan 24 00:54:28.458950 dockerd[1838]: time="2026-01-24T00:54:28.458596951Z" level=info msg="Daemon has completed initialization" Jan 24 00:54:28.460123 dockerd[1838]: time="2026-01-24T00:54:28.459016562Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:54:28.459974 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:54:28.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:31.620096 containerd[1609]: time="2026-01-24T00:54:31.619107746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 00:54:33.851874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576434654.mount: Deactivated successfully. Jan 24 00:54:36.352677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:54:36.361089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:37.158657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:54:37.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:37.166909 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 24 00:54:37.166997 kernel: audit: type=1130 audit(1769216077.158:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:37.211544 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:54:38.213257 kubelet[2144]: E0124 00:54:38.212011 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:54:38.223294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:54:38.224011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:54:38.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:54:38.226880 systemd[1]: kubelet.service: Consumed 1.585s CPU time, 110.7M memory peak. Jan 24 00:54:38.249879 kernel: audit: type=1131 audit(1769216078.225:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
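The kubelet error line above (`E0124 00:54:38.212011 2144 run.go:72] ...`) uses the klog header layout: severity letter, MMDD, wall-clock time, process id, then `file:line]`. A small parser sketch for that prefix — the regex is a simplification of the full klog format, kept to the fields visible in this log:

```python
import re

# klog header prefix: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>]
# Simplified pattern covering the fields seen in the kubelet lines above.
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+)\s+"
    r"(?P<file>[^:]+):(?P<line>\d+)\]"
)

def parse_klog_header(line: str) -> dict:
    """Return the klog header fields of a log line, or {} if it doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else {}

hdr = parse_klog_header('E0124 00:54:38.212011 2144 run.go:72] "command failed"')
# hdr['sev'] == 'E' (error), hdr['pid'] == '2144', emitted from run.go line 72
```

Here the severity `E` and the message body identify the failure cause: `/var/lib/kubelet/config.yaml` does not exist yet, which is expected on a node where `kubeadm init`/`join` has not completed.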
res=failed' Jan 24 00:54:42.269342 containerd[1609]: time="2026-01-24T00:54:42.269032558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:42.271514 containerd[1609]: time="2026-01-24T00:54:42.270891186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29094702" Jan 24 00:54:42.273884 containerd[1609]: time="2026-01-24T00:54:42.273849140Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:42.282377 containerd[1609]: time="2026-01-24T00:54:42.282026931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:42.284879 containerd[1609]: time="2026-01-24T00:54:42.284635576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 10.664940205s" Jan 24 00:54:42.284969 containerd[1609]: time="2026-01-24T00:54:42.284871040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 00:54:42.290492 containerd[1609]: time="2026-01-24T00:54:42.290439359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 00:54:48.357564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
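The containerd lines above give enough to estimate the effective transfer rate of the kube-apiserver pull: 29094702 bytes read and a reported total of 10.664940205 s. Note the reported duration also covers unpacking and registry round-trips, so this is only a rough lower bound on network throughput:

```python
# Figures copied from the containerd "stop pulling" and "Pulled image" lines.
BYTES_READ = 29_094_702        # bytes read for registry.k8s.io/kube-apiserver:v1.33.7
PULL_SECONDS = 10.664940205    # total pull time reported by containerd

rate_mib_s = BYTES_READ / PULL_SECONDS / (1024 * 1024)
# roughly 2.6 MiB/s effective rate (pull time includes unpacking, so a lower bound)
```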
Jan 24 00:54:48.363254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:48.722258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:48.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:48.748040 kernel: audit: type=1130 audit(1769216088.722:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:48.769418 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:54:48.907430 kubelet[2165]: E0124 00:54:48.906846 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:54:48.910972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:54:48.911299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:54:48.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:54:48.931651 kernel: audit: type=1131 audit(1769216088.912:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:54:48.913199 systemd[1]: kubelet.service: Consumed 366ms CPU time, 110.9M memory peak. Jan 24 00:54:49.956111 containerd[1609]: time="2026-01-24T00:54:49.954650611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:49.962006 containerd[1609]: time="2026-01-24T00:54:49.961067112Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 24 00:54:49.967859 containerd[1609]: time="2026-01-24T00:54:49.967620908Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:49.985213 containerd[1609]: time="2026-01-24T00:54:49.984640051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:49.987087 containerd[1609]: time="2026-01-24T00:54:49.986578418Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 7.696110477s" Jan 24 00:54:49.987087 containerd[1609]: time="2026-01-24T00:54:49.986616449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 00:54:49.996145 containerd[1609]: time="2026-01-24T00:54:49.995904546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 00:54:52.971949 
update_engine[1588]: I20260124 00:54:52.971215 1588 update_attempter.cc:509] Updating boot flags... Jan 24 00:54:56.722400 containerd[1609]: time="2026-01-24T00:54:56.717192380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:56.725829 containerd[1609]: time="2026-01-24T00:54:56.725666519Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 24 00:54:56.736885 containerd[1609]: time="2026-01-24T00:54:56.735188150Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:56.754693 containerd[1609]: time="2026-01-24T00:54:56.753615533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:56.758799 containerd[1609]: time="2026-01-24T00:54:56.755365332Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 6.759422734s" Jan 24 00:54:56.758799 containerd[1609]: time="2026-01-24T00:54:56.755503037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 00:54:56.762964 containerd[1609]: time="2026-01-24T00:54:56.762927927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 00:54:59.105195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 24 00:54:59.121207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:59.648302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:59.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:59.673232 kernel: audit: type=1130 audit(1769216099.648:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:59.684559 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:55:00.000486 kubelet[2203]: E0124 00:55:00.000186 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:55:00.007335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:55:00.008286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:55:00.014219 systemd[1]: kubelet.service: Consumed 694ms CPU time, 110.1M memory peak. Jan 24 00:55:00.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:00.049904 kernel: audit: type=1131 audit(1769216100.011:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
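The kernel audit records carry their own Unix timestamp, e.g. `audit(1769216099.648:293)` in the SERVICE_START echo above. Converting that epoch back to UTC should reproduce the syslog timestamp on the same line (Jan 24 00:54:59), a quick sanity check that the two clocks agree:

```python
from datetime import datetime, timezone

# Epoch value copied from the audit(1769216099.648:293) record above.
AUDIT_EPOCH = 1769216099.648

stamp = datetime.fromtimestamp(AUDIT_EPOCH, tz=timezone.utc)
# stamp -> 2026-01-24 00:54:59.648000+00:00, matching the syslog prefix
```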
terminal=? res=failed' Jan 24 00:55:02.405491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794617027.mount: Deactivated successfully. Jan 24 00:55:08.203863 containerd[1609]: time="2026-01-24T00:55:08.203629189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:08.209107 containerd[1609]: time="2026-01-24T00:55:08.209032005Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 24 00:55:08.211483 containerd[1609]: time="2026-01-24T00:55:08.211284835Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:08.217045 containerd[1609]: time="2026-01-24T00:55:08.216902477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:08.219014 containerd[1609]: time="2026-01-24T00:55:08.218876174Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 11.455731002s" Jan 24 00:55:08.219014 containerd[1609]: time="2026-01-24T00:55:08.218954360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 00:55:08.222804 containerd[1609]: time="2026-01-24T00:55:08.222067622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 00:55:09.726445 systemd[1]: 
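The transient mount units above (`var-lib-containerd-tmpmounts-containerd\x2dmount3794617027.mount`) use systemd's unit-name escaping: `/` becomes `-` and a literal `-` in the path becomes `\x2d`. A sketch of the reverse mapping, simplified to the escapes this name actually contains:

```python
import re

# Reverse systemd mount-unit name escaping: '-' separates path components,
# and '\xNN' sequences encode escaped bytes (here, literal '-' as '\x2d').
# Simplified: handles only the escapes present in the unit names above.
def unescape_mount_unit(name: str) -> str:
    if name.endswith(".mount"):
        name = name[: -len(".mount")]
    unhex = lambda seg: re.sub(r"\\x([0-9a-fA-F]{2})",
                               lambda m: chr(int(m.group(1), 16)), seg)
    return "/" + "/".join(unhex(seg) for seg in name.split("-"))

path = unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3794617027.mount")
# path -> /var/lib/containerd/tmpmounts/containerd-mount3794617027
```

These are containerd's scratch mounts for unpacking pulled image layers; "Deactivated successfully" just means the temporary mount was cleaned up after a pull.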
var-lib-containerd-tmpmounts-containerd\x2dmount1865156972.mount: Deactivated successfully. Jan 24 00:55:10.102864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 24 00:55:10.114008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:10.827912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:10.857443 kernel: audit: type=1130 audit(1769216110.828:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:10.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:10.869323 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:55:11.095007 kubelet[2234]: E0124 00:55:11.094357 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:55:11.100993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:55:11.105017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:55:11.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:11.113890 systemd[1]: kubelet.service: Consumed 544ms CPU time, 110.7M memory peak. 
Jan 24 00:55:11.128103 kernel: audit: type=1131 audit(1769216111.112:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:15.866055 containerd[1609]: time="2026-01-24T00:55:15.864941873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:15.885955 containerd[1609]: time="2026-01-24T00:55:15.885445619Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20679169" Jan 24 00:55:15.920623 containerd[1609]: time="2026-01-24T00:55:15.918930429Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:15.976974 containerd[1609]: time="2026-01-24T00:55:15.976282704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:16.006670 containerd[1609]: time="2026-01-24T00:55:16.003262875Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 7.781067826s" Jan 24 00:55:16.006670 containerd[1609]: time="2026-01-24T00:55:16.003832638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 00:55:16.021264 containerd[1609]: time="2026-01-24T00:55:16.017354863Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:55:16.966311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463671832.mount: Deactivated successfully. Jan 24 00:55:16.997918 containerd[1609]: time="2026-01-24T00:55:16.997559342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:55:17.004251 containerd[1609]: time="2026-01-24T00:55:17.003995560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=501" Jan 24 00:55:17.011788 containerd[1609]: time="2026-01-24T00:55:17.010342495Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:55:17.044589 containerd[1609]: time="2026-01-24T00:55:17.042677609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:55:17.047317 containerd[1609]: time="2026-01-24T00:55:17.046275147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.028100636s" Jan 24 00:55:17.047317 containerd[1609]: time="2026-01-24T00:55:17.046364994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:55:17.049024 containerd[1609]: 
time="2026-01-24T00:55:17.048919538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 00:55:18.175119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632537237.mount: Deactivated successfully. Jan 24 00:55:21.353930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 24 00:55:21.362369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:21.892796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:21.913625 kernel: audit: type=1130 audit(1769216121.892:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:21.934680 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:55:22.167879 kubelet[2348]: E0124 00:55:22.166341 2348 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:55:22.173334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:55:22.173798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:55:22.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:55:22.177535 systemd[1]: kubelet.service: Consumed 545ms CPU time, 110.8M memory peak. Jan 24 00:55:22.190309 kernel: audit: type=1131 audit(1769216122.175:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:30.355869 containerd[1609]: time="2026-01-24T00:55:30.355636232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:30.358586 containerd[1609]: time="2026-01-24T00:55:30.357656862Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58133605" Jan 24 00:55:30.362591 containerd[1609]: time="2026-01-24T00:55:30.360561643Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:30.379522 containerd[1609]: time="2026-01-24T00:55:30.376090799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:30.379522 containerd[1609]: time="2026-01-24T00:55:30.377013712Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 13.328021069s" Jan 24 00:55:30.379522 containerd[1609]: time="2026-01-24T00:55:30.377046723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 00:55:32.301650 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 7. Jan 24 00:55:32.357118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:32.914661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:32.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:32.943043 kernel: audit: type=1130 audit(1769216132.917:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:32.963696 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:55:33.618111 kubelet[2392]: E0124 00:55:33.617816 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:55:33.650568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:55:33.651020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:55:33.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:33.652848 systemd[1]: kubelet.service: Consumed 897ms CPU time, 108.6M memory peak. 
Jan 24 00:55:33.668826 kernel: audit: type=1131 audit(1769216133.652:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:37.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:37.617035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:37.617303 systemd[1]: kubelet.service: Consumed 897ms CPU time, 108.6M memory peak. Jan 24 00:55:37.624417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:37.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:37.644839 kernel: audit: type=1130 audit(1769216137.616:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:37.644949 kernel: audit: type=1131 audit(1769216137.616:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:37.734686 systemd[1]: Reload requested from client PID 2408 ('systemctl') (unit session-8.scope)... Jan 24 00:55:37.734936 systemd[1]: Reloading... Jan 24 00:55:37.958875 zram_generator::config[2455]: No configuration found. Jan 24 00:55:38.653406 systemd[1]: Reloading finished in 917 ms. 
Jan 24 00:55:38.717000 audit: BPF prog-id=63 op=LOAD Jan 24 00:55:38.724829 kernel: audit: type=1334 audit(1769216138.717:303): prog-id=63 op=LOAD Jan 24 00:55:38.718000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:55:38.735979 kernel: audit: type=1334 audit(1769216138.718:304): prog-id=55 op=UNLOAD Jan 24 00:55:38.718000 audit: BPF prog-id=64 op=LOAD Jan 24 00:55:38.718000 audit: BPF prog-id=65 op=LOAD Jan 24 00:55:38.742144 kernel: audit: type=1334 audit(1769216138.718:305): prog-id=64 op=LOAD Jan 24 00:55:38.742198 kernel: audit: type=1334 audit(1769216138.718:306): prog-id=65 op=LOAD Jan 24 00:55:38.718000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:55:38.718000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:55:38.757937 kernel: audit: type=1334 audit(1769216138.718:307): prog-id=56 op=UNLOAD Jan 24 00:55:38.758308 kernel: audit: type=1334 audit(1769216138.718:308): prog-id=57 op=UNLOAD Jan 24 00:55:38.758395 kernel: audit: type=1334 audit(1769216138.720:309): prog-id=66 op=LOAD Jan 24 00:55:38.720000 audit: BPF prog-id=66 op=LOAD Jan 24 00:55:38.720000 audit: BPF prog-id=48 op=UNLOAD Jan 24 00:55:38.767404 kernel: audit: type=1334 audit(1769216138.720:310): prog-id=48 op=UNLOAD Jan 24 00:55:38.767519 kernel: audit: type=1334 audit(1769216138.720:311): prog-id=67 op=LOAD Jan 24 00:55:38.720000 audit: BPF prog-id=67 op=LOAD Jan 24 00:55:38.720000 audit: BPF prog-id=68 op=LOAD Jan 24 00:55:38.774850 kernel: audit: type=1334 audit(1769216138.720:312): prog-id=68 op=LOAD Jan 24 00:55:38.720000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:55:38.720000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:55:38.726000 audit: BPF prog-id=69 op=LOAD Jan 24 00:55:38.726000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:55:38.727000 audit: BPF prog-id=70 op=LOAD Jan 24 00:55:38.727000 audit: BPF prog-id=71 op=LOAD Jan 24 00:55:38.727000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:55:38.727000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:55:38.728000 audit: BPF prog-id=72 op=LOAD Jan 24 00:55:38.728000 
audit: BPF prog-id=58 op=UNLOAD Jan 24 00:55:38.735000 audit: BPF prog-id=73 op=LOAD Jan 24 00:55:38.735000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:55:38.738000 audit: BPF prog-id=74 op=LOAD Jan 24 00:55:38.738000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:55:38.739000 audit: BPF prog-id=75 op=LOAD Jan 24 00:55:38.739000 audit: BPF prog-id=76 op=LOAD Jan 24 00:55:38.739000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:55:38.739000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:55:38.743000 audit: BPF prog-id=77 op=LOAD Jan 24 00:55:38.743000 audit: BPF prog-id=78 op=LOAD Jan 24 00:55:38.743000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:55:38.743000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:55:38.744000 audit: BPF prog-id=79 op=LOAD Jan 24 00:55:38.745000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:55:38.745000 audit: BPF prog-id=80 op=LOAD Jan 24 00:55:38.745000 audit: BPF prog-id=45 op=UNLOAD Jan 24 00:55:38.746000 audit: BPF prog-id=81 op=LOAD Jan 24 00:55:38.746000 audit: BPF prog-id=82 op=LOAD Jan 24 00:55:38.746000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:55:38.746000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:55:38.819940 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:55:38.820132 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:55:38.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:55:38.822001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:38.822129 systemd[1]: kubelet.service: Consumed 276ms CPU time, 98.5M memory peak. Jan 24 00:55:38.829259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:39.771643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:55:39.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:55:39.801530 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:55:40.485559 kubelet[2503]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:55:40.485559 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:55:40.485559 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:55:40.485559 kubelet[2503]: I0124 00:55:40.485127 2503 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:55:42.481100 kubelet[2503]: I0124 00:55:42.480824 2503 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:55:42.481100 kubelet[2503]: I0124 00:55:42.481034 2503 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:55:42.483370 kubelet[2503]: I0124 00:55:42.482460 2503 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:55:42.689828 kubelet[2503]: I0124 00:55:42.689526 2503 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:55:42.693001 kubelet[2503]: E0124 00:55:42.692656 2503 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:55:42.802839 kubelet[2503]: I0124 00:55:42.800077 2503 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:55:42.885455 kubelet[2503]: I0124 00:55:42.885079 2503 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:55:42.895592 kubelet[2503]: I0124 00:55:42.894984 2503 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:55:42.897654 kubelet[2503]: I0124 00:55:42.896524 2503 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:55:42.897654 kubelet[2503]: I0124 00:55:42.897542 2503 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:55:42.897654 
kubelet[2503]: I0124 00:55:42.897562 2503 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:55:42.903536 kubelet[2503]: I0124 00:55:42.902942 2503 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:55:42.916642 kubelet[2503]: I0124 00:55:42.912691 2503 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:55:42.918369 kubelet[2503]: I0124 00:55:42.918175 2503 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:55:42.918369 kubelet[2503]: I0124 00:55:42.918290 2503 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:55:42.918369 kubelet[2503]: I0124 00:55:42.918314 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:55:42.934688 kubelet[2503]: E0124 00:55:42.929173 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:55:42.954193 kubelet[2503]: E0124 00:55:42.929280 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:55:42.987797 kubelet[2503]: I0124 00:55:42.986976 2503 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:55:42.988897 kubelet[2503]: I0124 00:55:42.988872 2503 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:55:43.058123 kubelet[2503]: W0124 
00:55:43.055560 2503 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:55:43.068676 kubelet[2503]: I0124 00:55:43.068150 2503 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:55:43.068676 kubelet[2503]: I0124 00:55:43.068439 2503 server.go:1289] "Started kubelet" Jan 24 00:55:43.071288 kubelet[2503]: I0124 00:55:43.071177 2503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:55:43.077006 kubelet[2503]: I0124 00:55:43.076596 2503 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:55:43.089697 kubelet[2503]: I0124 00:55:43.087861 2503 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:55:43.103212 kubelet[2503]: I0124 00:55:43.103094 2503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:55:43.106039 kubelet[2503]: I0124 00:55:43.104481 2503 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:55:43.106039 kubelet[2503]: I0124 00:55:43.104889 2503 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:55:43.106039 kubelet[2503]: E0124 00:55:43.105125 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:55:43.107602 kubelet[2503]: I0124 00:55:43.107198 2503 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:55:43.111803 kubelet[2503]: E0124 00:55:43.111309 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" Jan 24 00:55:43.112378 kubelet[2503]: I0124 00:55:43.112304 2503 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:55:43.113823 kubelet[2503]: I0124 00:55:43.112481 2503 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:55:43.114606 kubelet[2503]: E0124 00:55:43.114528 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:55:43.120590 kubelet[2503]: E0124 00:55:43.119879 2503 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:55:43.121209 kubelet[2503]: E0124 00:55:43.115134 2503 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84b55a79094b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:55:43.068301643 +0000 UTC m=+3.242779920,LastTimestamp:2026-01-24 00:55:43.068301643 +0000 UTC m=+3.242779920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:55:43.122995 kubelet[2503]: I0124 00:55:43.122927 2503 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:55:43.123058 kubelet[2503]: I0124 00:55:43.122997 2503 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:55:43.123190 kubelet[2503]: 
I0124 00:55:43.123116 2503 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:55:43.173069 kubelet[2503]: I0124 00:55:43.172849 2503 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:55:43.171000 audit[2523]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:55:43.171000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeeacd76c0 a2=0 a3=0 items=0 ppid=2503 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:55:43.178855 kubelet[2503]: I0124 00:55:43.178236 2503 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:55:43.178855 kubelet[2503]: I0124 00:55:43.178264 2503 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:55:43.178855 kubelet[2503]: I0124 00:55:43.178293 2503 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:55:43.176000 audit[2527]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.176000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdf21cf4b0 a2=0 a3=0 items=0 ppid=2503 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.176000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:55:43.183000 audit[2529]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.183000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4c6d04d0 a2=0 a3=0 items=0 ppid=2503 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:55:43.187000 audit[2528]: NETFILTER_CFG table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:55:43.187000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff96ca1de0 a2=0 a3=0 items=0 ppid=2503 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:55:43.191000 audit[2531]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:55:43.191000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd2bf7050 a2=0 a3=0 items=0 ppid=2503 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.191000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:55:43.192000 audit[2532]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.192000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffa24aed80 a2=0 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:55:43.199000 audit[2533]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:55:43.199000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0c6e4c90 a2=0 a3=0 items=0 ppid=2503 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:55:43.206336 kubelet[2503]: E0124 00:55:43.205315 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:55:43.207140 kubelet[2503]: I0124 00:55:43.207057 2503 policy_none.go:49] "None policy: Start" Jan 24 00:55:43.207140 kubelet[2503]: I0124 00:55:43.207129 2503 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:55:43.207237 kubelet[2503]: I0124 00:55:43.207154 2503 state_mem.go:35] "Initializing new in-memory 
state store" Jan 24 00:55:43.213000 audit[2535]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.213000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd4e482890 a2=0 a3=0 items=0 ppid=2503 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:55:43.260003 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:55:43.263000 audit[2538]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.263000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe4a2b9ac0 a2=0 a3=0 items=0 ppid=2503 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 24 00:55:43.270927 kubelet[2503]: I0124 00:55:43.270660 2503 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 24 00:55:43.270927 kubelet[2503]: I0124 00:55:43.270866 2503 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:55:43.270927 kubelet[2503]: I0124 00:55:43.270902 2503 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:55:43.271136 kubelet[2503]: I0124 00:55:43.270970 2503 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:55:43.271136 kubelet[2503]: E0124 00:55:43.271089 2503 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:55:43.272343 kubelet[2503]: E0124 00:55:43.272288 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:55:43.273000 audit[2540]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.273000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd64a57590 a2=0 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:55:43.279000 audit[2541]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.279000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffcb3640d40 a2=0 a3=0 items=0 ppid=2503 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:55:43.280810 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:55:43.286000 audit[2542]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:55:43.286000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0fdc3b10 a2=0 a3=0 items=0 ppid=2503 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:43.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:55:43.292688 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:55:43.307443 kubelet[2503]: E0124 00:55:43.307208 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:55:43.308459 kubelet[2503]: E0124 00:55:43.308172 2503 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:55:43.316150 kubelet[2503]: I0124 00:55:43.311123 2503 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:55:43.316150 kubelet[2503]: I0124 00:55:43.311148 2503 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:55:43.316150 kubelet[2503]: I0124 00:55:43.311576 2503 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:55:43.316150 kubelet[2503]: E0124 00:55:43.313108 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" Jan 24 00:55:43.316150 kubelet[2503]: E0124 00:55:43.314518 2503 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:55:43.316150 kubelet[2503]: E0124 00:55:43.314828 2503 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:55:43.415338 kubelet[2503]: I0124 00:55:43.413882 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:43.415338 kubelet[2503]: I0124 00:55:43.413931 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:43.415338 kubelet[2503]: I0124 00:55:43.413960 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:43.415338 kubelet[2503]: I0124 00:55:43.413985 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:43.415338 kubelet[2503]: I0124 00:55:43.414007 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:55:43.415669 kubelet[2503]: I0124 00:55:43.414030 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:55:43.415669 kubelet[2503]: I0124 00:55:43.414132 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:43.415669 kubelet[2503]: I0124 00:55:43.414157 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:55:43.415669 kubelet[2503]: I0124 00:55:43.414173 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:43.415669 kubelet[2503]: I0124 00:55:43.414250 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:55:43.415669 
kubelet[2503]: E0124 00:55:43.415125 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jan 24 00:55:43.422503 systemd[1]: Created slice kubepods-burstable-podfbf55e320cc55f60bafe670efe8355ae.slice - libcontainer container kubepods-burstable-podfbf55e320cc55f60bafe670efe8355ae.slice. Jan 24 00:55:43.473120 kubelet[2503]: E0124 00:55:43.472623 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:43.489342 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 24 00:55:43.509304 kubelet[2503]: E0124 00:55:43.508173 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:43.547362 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 24 00:55:43.757020 kubelet[2503]: E0124 00:55:43.756894 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:43.758907 kubelet[2503]: E0124 00:55:43.758822 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" Jan 24 00:55:43.759859 kubelet[2503]: E0124 00:55:43.759039 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:43.760915 kubelet[2503]: I0124 00:55:43.760845 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:43.763786 kubelet[2503]: E0124 00:55:43.761472 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jan 24 00:55:43.763872 containerd[1609]: time="2026-01-24T00:55:43.761916825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 24 00:55:43.776886 kubelet[2503]: E0124 00:55:43.776668 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:43.779545 containerd[1609]: time="2026-01-24T00:55:43.778239124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbf55e320cc55f60bafe670efe8355ae,Namespace:kube-system,Attempt:0,}" Jan 24 00:55:43.812518 kubelet[2503]: E0124 00:55:43.812471 2503 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:43.814111 containerd[1609]: time="2026-01-24T00:55:43.814037412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 24 00:55:43.997559 containerd[1609]: time="2026-01-24T00:55:43.996219611Z" level=info msg="connecting to shim 44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf" address="unix:///run/containerd/s/0d4c78297738113e5442a95cdb17fe794ae26b6f350ecf80593fe4cd5157b505" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:55:44.019651 containerd[1609]: time="2026-01-24T00:55:44.019549205Z" level=info msg="connecting to shim 5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b" address="unix:///run/containerd/s/e5e858c70a797c84f5d3f62f600e934e67a67039468ffc689f60e96097a8e2ed" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:55:44.032016 containerd[1609]: time="2026-01-24T00:55:44.030966026Z" level=info msg="connecting to shim 026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa" address="unix:///run/containerd/s/4a7f2d834e426f9008d8e3d8a94307f7d75de569d300175535d59ef8bc1da747" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:55:44.389571 kubelet[2503]: E0124 00:55:44.389444 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:55:44.396258 kubelet[2503]: E0124 00:55:44.389444 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:55:44.404883 kubelet[2503]: I0124 00:55:44.404448 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:44.406232 kubelet[2503]: E0124 00:55:44.405980 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jan 24 00:55:44.449408 kubelet[2503]: E0124 00:55:44.449215 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:55:44.452803 kubelet[2503]: E0124 00:55:44.452620 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:55:44.521868 systemd[1]: Started cri-containerd-026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa.scope - libcontainer container 026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa. 
Jan 24 00:55:44.561616 kubelet[2503]: E0124 00:55:44.561196 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" Jan 24 00:55:44.591519 systemd[1]: Started cri-containerd-44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf.scope - libcontainer container 44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf. Jan 24 00:55:44.716156 systemd[1]: Started cri-containerd-5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b.scope - libcontainer container 5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b. Jan 24 00:55:44.742125 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 24 00:55:44.744309 kernel: audit: type=1334 audit(1769216144.720:357): prog-id=83 op=LOAD Jan 24 00:55:44.720000 audit: BPF prog-id=83 op=LOAD Jan 24 00:55:44.747000 audit: BPF prog-id=84 op=LOAD Jan 24 00:55:44.774816 kernel: audit: type=1334 audit(1769216144.747:358): prog-id=84 op=LOAD Jan 24 00:55:44.774919 kernel: audit: type=1300 audit(1769216144.747:358): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.747000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779185 kubelet[2503]: E0124 00:55:44.779136 2503 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create 
certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:55:44.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.808935 kernel: audit: type=1327 audit(1769216144.747:358): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.748000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:55:44.748000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.844653 kernel: audit: type=1334 audit(1769216144.748:359): prog-id=84 op=UNLOAD Jan 24 00:55:44.844895 kernel: audit: type=1300 audit(1769216144.748:359): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.748000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.868633 kernel: audit: type=1327 audit(1769216144.748:359): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.868820 kernel: audit: type=1334 audit(1769216144.754:360): prog-id=85 op=LOAD Jan 24 00:55:44.868872 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Jan 24 00:55:44.754000 audit: BPF prog-id=85 op=LOAD Jan 24 00:55:44.888855 kernel: audit: type=1300 audit(1769216144.754:360): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.754000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.763000 audit: BPF prog-id=86 op=LOAD Jan 24 00:55:44.763000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 
items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.763000 audit: BPF prog-id=86 op=UNLOAD Jan 24 00:55:44.763000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.763000 audit: BPF prog-id=85 op=UNLOAD Jan 24 00:55:44.763000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.763000 audit: BPF prog-id=87 op=LOAD Jan 24 00:55:44.763000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2553 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366135343138393632333532373162653734306630623034646665 Jan 24 00:55:44.775000 audit: BPF prog-id=88 op=LOAD Jan 24 00:55:44.779000 audit: BPF prog-id=89 op=LOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.779000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.779000 audit: BPF prog-id=90 
op=LOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.779000 audit: BPF prog-id=91 op=LOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.779000 audit: BPF prog-id=91 op=UNLOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 
24 00:55:44.779000 audit: BPF prog-id=90 op=UNLOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.779000 audit: BPF prog-id=92 op=LOAD Jan 24 00:55:44.779000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2561 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434653632313138616534616563643535613162383236343061343263 Jan 24 00:55:44.812000 audit: BPF prog-id=93 op=LOAD Jan 24 00:55:44.814000 audit: BPF prog-id=94 op=LOAD Jan 24 00:55:44.814000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:44.814000 audit: BPF prog-id=94 op=UNLOAD Jan 24 00:55:44.814000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:44.814000 audit: BPF prog-id=95 op=LOAD Jan 24 00:55:44.814000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:44.814000 audit: BPF prog-id=96 op=LOAD Jan 24 00:55:44.814000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:55:44.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:44.884000 audit: BPF prog-id=95 op=UNLOAD Jan 24 00:55:44.884000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:44.901000 audit: BPF prog-id=97 op=LOAD Jan 24 00:55:44.901000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2569 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:44.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363061353961343430343063666564623531363330346133346239 Jan 24 00:55:45.044863 containerd[1609]: time="2026-01-24T00:55:45.044509912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbf55e320cc55f60bafe670efe8355ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf\"" Jan 24 
00:55:45.052801 kubelet[2503]: E0124 00:55:45.050063 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:45.073625 containerd[1609]: time="2026-01-24T00:55:45.073548852Z" level=info msg="CreateContainer within sandbox \"44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:55:45.118887 containerd[1609]: time="2026-01-24T00:55:45.115323368Z" level=info msg="Container 32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:45.372096 kubelet[2503]: I0124 00:55:45.370661 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:45.443928 kubelet[2503]: E0124 00:55:45.416627 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jan 24 00:55:45.553961 containerd[1609]: time="2026-01-24T00:55:45.553669617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa\"" Jan 24 00:55:45.556013 containerd[1609]: time="2026-01-24T00:55:45.555801325Z" level=info msg="CreateContainer within sandbox \"44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3\"" Jan 24 00:55:45.557461 containerd[1609]: time="2026-01-24T00:55:45.556941562Z" level=info msg="StartContainer for \"32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3\"" Jan 24 00:55:45.558194 kubelet[2503]: E0124 
00:55:45.558049 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:45.561861 containerd[1609]: time="2026-01-24T00:55:45.561665703Z" level=info msg="connecting to shim 32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3" address="unix:///run/containerd/s/0d4c78297738113e5442a95cdb17fe794ae26b6f350ecf80593fe4cd5157b505" protocol=ttrpc version=3 Jan 24 00:55:45.575474 containerd[1609]: time="2026-01-24T00:55:45.575422787Z" level=info msg="CreateContainer within sandbox \"026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:55:45.603490 containerd[1609]: time="2026-01-24T00:55:45.603119158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b\"" Jan 24 00:55:45.605377 kubelet[2503]: E0124 00:55:45.605063 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:45.610825 containerd[1609]: time="2026-01-24T00:55:45.610495012Z" level=info msg="Container 31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:45.621092 containerd[1609]: time="2026-01-24T00:55:45.620376844Z" level=info msg="CreateContainer within sandbox \"5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:55:45.654673 systemd[1]: Started cri-containerd-32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3.scope - libcontainer container 
32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3. Jan 24 00:55:45.665892 containerd[1609]: time="2026-01-24T00:55:45.665651926Z" level=info msg="CreateContainer within sandbox \"026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6\"" Jan 24 00:55:45.668876 containerd[1609]: time="2026-01-24T00:55:45.668838973Z" level=info msg="StartContainer for \"31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6\"" Jan 24 00:55:45.673985 containerd[1609]: time="2026-01-24T00:55:45.673699774Z" level=info msg="connecting to shim 31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6" address="unix:///run/containerd/s/4a7f2d834e426f9008d8e3d8a94307f7d75de569d300175535d59ef8bc1da747" protocol=ttrpc version=3 Jan 24 00:55:45.687497 containerd[1609]: time="2026-01-24T00:55:45.687456079Z" level=info msg="Container 6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:45.716314 containerd[1609]: time="2026-01-24T00:55:45.716221169Z" level=info msg="CreateContainer within sandbox \"5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1\"" Jan 24 00:55:45.718806 containerd[1609]: time="2026-01-24T00:55:45.717637492Z" level=info msg="StartContainer for \"6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1\"" Jan 24 00:55:45.726142 containerd[1609]: time="2026-01-24T00:55:45.726109242Z" level=info msg="connecting to shim 6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1" address="unix:///run/containerd/s/e5e858c70a797c84f5d3f62f600e934e67a67039468ffc689f60e96097a8e2ed" protocol=ttrpc version=3 Jan 24 00:55:45.762177 systemd[1]: Started 
cri-containerd-31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6.scope - libcontainer container 31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6. Jan 24 00:55:45.780000 audit: BPF prog-id=98 op=LOAD Jan 24 00:55:45.784000 audit: BPF prog-id=99 op=LOAD Jan 24 00:55:45.784000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.785000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:55:45.785000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.786000 audit: BPF prog-id=100 op=LOAD Jan 24 00:55:45.786000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.786000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.789000 audit: BPF prog-id=101 op=LOAD Jan 24 00:55:45.789000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.789000 audit: BPF prog-id=101 op=UNLOAD Jan 24 00:55:45.789000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.789000 audit: BPF prog-id=100 op=UNLOAD Jan 24 00:55:45.789000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:55:45.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.789000 audit: BPF prog-id=102 op=LOAD Jan 24 00:55:45.789000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2561 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643561313231303439323764356366303235626539396237626430 Jan 24 00:55:45.882940 systemd[1]: Started cri-containerd-6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1.scope - libcontainer container 6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1. Jan 24 00:55:45.910833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851129906.mount: Deactivated successfully. 
Jan 24 00:55:46.309690 kubelet[2503]: E0124 00:55:46.308661 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="3.2s" Jan 24 00:55:46.312301 kubelet[2503]: E0124 00:55:46.312023 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:55:46.365000 audit: BPF prog-id=103 op=LOAD Jan 24 00:55:46.390000 audit: BPF prog-id=104 op=LOAD Jan 24 00:55:46.394000 audit: BPF prog-id=105 op=LOAD Jan 24 00:55:46.394000 audit[2709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.394000 audit: BPF prog-id=105 op=UNLOAD Jan 24 00:55:46.394000 audit[2709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.394000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.396000 audit: BPF prog-id=106 op=LOAD Jan 24 00:55:46.396000 audit[2709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.397000 audit: BPF prog-id=107 op=LOAD Jan 24 00:55:46.397000 audit[2709]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.397000 audit: BPF prog-id=107 op=UNLOAD Jan 24 00:55:46.397000 audit[2709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:55:46.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.397000 audit: BPF prog-id=106 op=UNLOAD Jan 24 00:55:46.397000 audit[2709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.397000 audit: BPF prog-id=108 op=LOAD Jan 24 00:55:46.397000 audit[2709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2569 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386430383039626335336435316536626361303434343835396535 Jan 24 00:55:46.401000 audit: BPF prog-id=109 op=LOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=110 op=LOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=111 op=LOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=111 op=UNLOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=110 op=UNLOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.401000 audit: BPF prog-id=112 op=LOAD Jan 24 00:55:46.401000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2553 pid=2692 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643832626230666162666566636238666233393664643237663335 Jan 24 00:55:46.469995 containerd[1609]: time="2026-01-24T00:55:46.469903577Z" level=info msg="StartContainer for \"32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3\" returns successfully" Jan 24 00:55:46.597692 containerd[1609]: time="2026-01-24T00:55:46.597499552Z" level=info msg="StartContainer for \"6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1\" returns successfully" Jan 24 00:55:46.611631 kubelet[2503]: E0124 00:55:46.611600 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:46.615011 kubelet[2503]: E0124 00:55:46.614510 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:46.628189 kubelet[2503]: E0124 00:55:46.627944 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:46.628189 kubelet[2503]: E0124 00:55:46.628100 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:46.687641 containerd[1609]: time="2026-01-24T00:55:46.687275027Z" level=info msg="StartContainer for \"31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6\" returns successfully" Jan 24 00:55:46.975470 
kubelet[2503]: E0124 00:55:46.975179 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:55:47.265865 kubelet[2503]: E0124 00:55:47.262990 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:55:47.272510 kubelet[2503]: I0124 00:55:47.272419 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:47.274679 kubelet[2503]: E0124 00:55:47.274635 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jan 24 00:55:47.276817 kubelet[2503]: E0124 00:55:47.276590 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:55:47.659372 kubelet[2503]: E0124 00:55:47.659023 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:47.659372 kubelet[2503]: E0124 00:55:47.659208 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 24 00:55:47.664696 kubelet[2503]: E0124 00:55:47.664262 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:47.664696 kubelet[2503]: E0124 00:55:47.664485 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:47.795924 kubelet[2503]: E0124 00:55:47.792819 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:47.804127 kubelet[2503]: E0124 00:55:47.803267 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:48.666571 kubelet[2503]: E0124 00:55:48.666329 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:48.666571 kubelet[2503]: E0124 00:55:48.666545 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:48.669347 kubelet[2503]: E0124 00:55:48.668431 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:48.669347 kubelet[2503]: E0124 00:55:48.668802 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:49.773536 kubelet[2503]: E0124 00:55:49.772537 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Jan 24 00:55:49.795306 kubelet[2503]: E0124 00:55:49.772606 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:55:49.795809 kubelet[2503]: E0124 00:55:49.795785 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:49.796098 kubelet[2503]: E0124 00:55:49.795911 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:50.496448 kubelet[2503]: I0124 00:55:50.496119 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:55:53.317367 kubelet[2503]: E0124 00:55:53.316095 2503 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:55:53.625997 kubelet[2503]: E0124 00:55:53.625936 2503 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:55:53.713823 kubelet[2503]: I0124 00:55:53.713495 2503 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:55:53.713823 kubelet[2503]: E0124 00:55:53.713542 2503 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:55:53.733461 kubelet[2503]: E0124 00:55:53.726340 2503 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d84b55a79094b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:55:43.068301643 +0000 UTC m=+3.242779920,LastTimestamp:2026-01-24 00:55:43.068301643 +0000 UTC m=+3.242779920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:55:53.809027 kubelet[2503]: I0124 00:55:53.808986 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:53.855601 kubelet[2503]: E0124 00:55:53.855472 2503 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d84b55d8bca32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:55:43.119862322 +0000 UTC m=+3.294340599,LastTimestamp:2026-01-24 00:55:43.119862322 +0000 UTC m=+3.294340599,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:55:53.892319 kubelet[2503]: E0124 00:55:53.892124 2503 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:53.894122 kubelet[2503]: I0124 00:55:53.893978 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:55:53.902413 kubelet[2503]: E0124 
00:55:53.902123 2503 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:55:53.902413 kubelet[2503]: I0124 00:55:53.902220 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:55:53.906422 kubelet[2503]: E0124 00:55:53.906391 2503 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:55:54.282800 kubelet[2503]: I0124 00:55:54.282476 2503 apiserver.go:52] "Watching apiserver" Jan 24 00:55:54.312557 kubelet[2503]: I0124 00:55:54.312493 2503 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:55:55.406234 kubelet[2503]: I0124 00:55:55.406191 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:55:55.465364 kubelet[2503]: E0124 00:55:55.465086 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:56.075230 kubelet[2503]: E0124 00:55:56.072364 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:57.834498 kubelet[2503]: I0124 00:55:57.833501 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:55:57.880825 kubelet[2503]: E0124 00:55:57.880252 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:57.985602 
kubelet[2503]: I0124 00:55:57.985325 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.985302471 podStartE2EDuration="2.985302471s" podCreationTimestamp="2026-01-24 00:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:55:57.947304141 +0000 UTC m=+18.121782428" watchObservedRunningTime="2026-01-24 00:55:57.985302471 +0000 UTC m=+18.159780748" Jan 24 00:55:57.985602 kubelet[2503]: I0124 00:55:57.985489 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.985479299 podStartE2EDuration="985.479299ms" podCreationTimestamp="2026-01-24 00:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:55:57.982413952 +0000 UTC m=+18.156892230" watchObservedRunningTime="2026-01-24 00:55:57.985479299 +0000 UTC m=+18.159957596" Jan 24 00:55:58.125334 kubelet[2503]: E0124 00:55:58.119413 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:01.169478 systemd[1]: Reload requested from client PID 2792 ('systemctl') (unit session-8.scope)... Jan 24 00:56:01.169501 systemd[1]: Reloading... Jan 24 00:56:01.508814 zram_generator::config[2841]: No configuration found. Jan 24 00:56:02.203699 systemd[1]: Reloading finished in 1033 ms. Jan 24 00:56:02.270271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:02.292683 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 24 00:56:02.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:56:02.294211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:02.294367 systemd[1]: kubelet.service: Consumed 6.188s CPU time, 132.8M memory peak. Jan 24 00:56:02.305872 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 24 00:56:02.306224 kernel: audit: type=1131 audit(1769216162.293:404): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:56:02.300254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:02.307000 audit: BPF prog-id=113 op=LOAD Jan 24 00:56:02.317908 kernel: audit: type=1334 audit(1769216162.307:405): prog-id=113 op=LOAD Jan 24 00:56:02.317956 kernel: audit: type=1334 audit(1769216162.307:406): prog-id=69 op=UNLOAD Jan 24 00:56:02.317993 kernel: audit: type=1334 audit(1769216162.307:407): prog-id=114 op=LOAD Jan 24 00:56:02.318105 kernel: audit: type=1334 audit(1769216162.307:408): prog-id=115 op=LOAD Jan 24 00:56:02.318160 kernel: audit: type=1334 audit(1769216162.307:409): prog-id=70 op=UNLOAD Jan 24 00:56:02.318192 kernel: audit: type=1334 audit(1769216162.307:410): prog-id=71 op=UNLOAD Jan 24 00:56:02.318394 kernel: audit: type=1334 audit(1769216162.315:411): prog-id=116 op=LOAD Jan 24 00:56:02.318430 kernel: audit: type=1334 audit(1769216162.315:412): prog-id=63 op=UNLOAD Jan 24 00:56:02.318461 kernel: audit: type=1334 audit(1769216162.315:413): prog-id=117 op=LOAD Jan 24 00:56:02.307000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:56:02.307000 audit: BPF prog-id=114 op=LOAD Jan 24 00:56:02.307000 audit: BPF prog-id=115 op=LOAD Jan 24 00:56:02.307000 audit: BPF prog-id=70 op=UNLOAD Jan 24 
00:56:02.307000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:56:02.315000 audit: BPF prog-id=116 op=LOAD Jan 24 00:56:02.315000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:56:02.315000 audit: BPF prog-id=117 op=LOAD Jan 24 00:56:02.315000 audit: BPF prog-id=118 op=LOAD Jan 24 00:56:02.315000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:56:02.315000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:56:02.322000 audit: BPF prog-id=119 op=LOAD Jan 24 00:56:02.322000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:56:02.322000 audit: BPF prog-id=120 op=LOAD Jan 24 00:56:02.322000 audit: BPF prog-id=121 op=LOAD Jan 24 00:56:02.323000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:56:02.323000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:56:02.333000 audit: BPF prog-id=122 op=LOAD Jan 24 00:56:02.333000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:56:02.350000 audit: BPF prog-id=123 op=LOAD Jan 24 00:56:02.350000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:56:02.351000 audit: BPF prog-id=124 op=LOAD Jan 24 00:56:02.351000 audit: BPF prog-id=125 op=LOAD Jan 24 00:56:02.351000 audit: BPF prog-id=81 op=UNLOAD Jan 24 00:56:02.351000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:56:02.352000 audit: BPF prog-id=126 op=LOAD Jan 24 00:56:02.352000 audit: BPF prog-id=127 op=LOAD Jan 24 00:56:02.352000 audit: BPF prog-id=77 op=UNLOAD Jan 24 00:56:02.352000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:56:02.353000 audit: BPF prog-id=128 op=LOAD Jan 24 00:56:02.353000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:56:02.355000 audit: BPF prog-id=129 op=LOAD Jan 24 00:56:02.355000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:56:02.355000 audit: BPF prog-id=130 op=LOAD Jan 24 00:56:02.355000 audit: BPF prog-id=131 op=LOAD Jan 24 00:56:02.355000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:56:02.355000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:56:02.360000 audit: BPF prog-id=132 op=LOAD Jan 24 00:56:02.362000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:56:03.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:56:03.055697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:03.111929 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:03.530485 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:03.530485 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:56:03.530485 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:56:03.530485 kubelet[2883]: I0124 00:56:03.529565 2883 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:56:03.584615 kubelet[2883]: I0124 00:56:03.584324 2883 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:56:03.584615 kubelet[2883]: I0124 00:56:03.584404 2883 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:56:03.584963 kubelet[2883]: I0124 00:56:03.584697 2883 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:56:03.594972 kubelet[2883]: I0124 00:56:03.594843 2883 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:56:03.605465 kubelet[2883]: I0124 00:56:03.603489 2883 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:56:03.623367 kubelet[2883]: I0124 00:56:03.622912 2883 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:56:03.684528 kubelet[2883]: I0124 00:56:03.683301 2883 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:56:03.684528 kubelet[2883]: I0124 00:56:03.683945 2883 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:56:03.691844 kubelet[2883]: I0124 00:56:03.688789 2883 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:56:03.691844 kubelet[2883]: I0124 00:56:03.689149 2883 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:56:03.691844 
kubelet[2883]: I0124 00:56:03.689169 2883 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:56:03.691844 kubelet[2883]: I0124 00:56:03.689244 2883 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:03.691844 kubelet[2883]: I0124 00:56:03.690030 2883 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:56:03.692467 kubelet[2883]: I0124 00:56:03.690063 2883 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:56:03.692467 kubelet[2883]: I0124 00:56:03.690101 2883 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:56:03.692467 kubelet[2883]: I0124 00:56:03.690129 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:56:03.709218 kubelet[2883]: I0124 00:56:03.707075 2883 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:56:03.709218 kubelet[2883]: I0124 00:56:03.707818 2883 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:56:03.768812 kubelet[2883]: I0124 00:56:03.764879 2883 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:56:03.768812 kubelet[2883]: I0124 00:56:03.764943 2883 server.go:1289] "Started kubelet" Jan 24 00:56:03.769520 kubelet[2883]: I0124 00:56:03.769389 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:56:03.769892 kubelet[2883]: I0124 00:56:03.769867 2883 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:56:03.772445 kubelet[2883]: I0124 00:56:03.772422 2883 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:56:03.795232 kubelet[2883]: I0124 00:56:03.772589 2883 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:56:03.826435 
kubelet[2883]: I0124 00:56:03.825210 2883 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:56:03.828510 kubelet[2883]: I0124 00:56:03.795043 2883 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:56:03.828510 kubelet[2883]: I0124 00:56:03.799323 2883 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:56:03.829248 kubelet[2883]: I0124 00:56:03.828669 2883 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:56:03.829248 kubelet[2883]: I0124 00:56:03.775938 2883 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:56:03.829532 kubelet[2883]: I0124 00:56:03.799334 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:56:03.852579 kubelet[2883]: I0124 00:56:03.838231 2883 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:56:03.856435 kubelet[2883]: E0124 00:56:03.853104 2883 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:56:03.856435 kubelet[2883]: I0124 00:56:03.855054 2883 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:56:03.958595 kubelet[2883]: I0124 00:56:03.958261 2883 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:56:03.964451 kubelet[2883]: I0124 00:56:03.964032 2883 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:56:03.964451 kubelet[2883]: I0124 00:56:03.964072 2883 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:56:03.964451 kubelet[2883]: I0124 00:56:03.964160 2883 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:56:03.964451 kubelet[2883]: I0124 00:56:03.964217 2883 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:56:03.964451 kubelet[2883]: E0124 00:56:03.964325 2883 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:56:04.067528 kubelet[2883]: E0124 00:56:04.066415 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.114286 2883 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.114305 2883 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.114331 2883 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.114663 2883 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.114680 2883 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.115189 2883 policy_none.go:49] "None policy: Start" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.115215 2883 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.115233 2883 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:56:04.116194 kubelet[2883]: I0124 00:56:04.115362 2883 state_mem.go:75] "Updated machine memory state" Jan 24 00:56:04.185373 kubelet[2883]: E0124 00:56:04.185159 
2883 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:56:04.185862 kubelet[2883]: I0124 00:56:04.185573 2883 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:56:04.185862 kubelet[2883]: I0124 00:56:04.185599 2883 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:56:04.188474 kubelet[2883]: I0124 00:56:04.187552 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:56:04.194028 kubelet[2883]: E0124 00:56:04.193135 2883 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:56:04.276351 kubelet[2883]: I0124 00:56:04.273493 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:04.277834 kubelet[2883]: I0124 00:56:04.276844 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:04.297037 kubelet[2883]: I0124 00:56:04.296932 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.315633 kubelet[2883]: I0124 00:56:04.312126 2883 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:04.342107 kubelet[2883]: I0124 00:56:04.341841 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:04.344580 kubelet[2883]: I0124 00:56:04.342343 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.344580 kubelet[2883]: I0124 00:56:04.342675 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.344580 kubelet[2883]: I0124 00:56:04.343198 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.344580 kubelet[2883]: I0124 00:56:04.343341 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:04.344580 kubelet[2883]: I0124 00:56:04.343367 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.345295 kubelet[2883]: I0124 00:56:04.343391 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.345295 kubelet[2883]: I0124 00:56:04.343413 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:04.345295 kubelet[2883]: I0124 00:56:04.343438 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbf55e320cc55f60bafe670efe8355ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbf55e320cc55f60bafe670efe8355ae\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:04.367392 kubelet[2883]: E0124 00:56:04.366627 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:04.393223 kubelet[2883]: E0124 00:56:04.393104 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:04.397861 kubelet[2883]: I0124 00:56:04.397195 2883 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:56:04.397861 kubelet[2883]: I0124 00:56:04.397408 2883 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:56:04.659852 kubelet[2883]: E0124 00:56:04.658287 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
00:56:04.681328 kubelet[2883]: E0124 00:56:04.681124 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:04.695123 kubelet[2883]: I0124 00:56:04.694419 2883 apiserver.go:52] "Watching apiserver" Jan 24 00:56:04.695123 kubelet[2883]: E0124 00:56:04.694901 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:04.744326 kubelet[2883]: I0124 00:56:04.743026 2883 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:56:05.177242 kubelet[2883]: E0124 00:56:05.171423 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:05.177242 kubelet[2883]: I0124 00:56:05.172446 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:05.177242 kubelet[2883]: E0124 00:56:05.173202 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:05.222818 kubelet[2883]: E0124 00:56:05.222087 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:05.223172 kubelet[2883]: E0124 00:56:05.223148 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:05.294686 kubelet[2883]: I0124 00:56:05.294618 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=1.294598681 podStartE2EDuration="1.294598681s" podCreationTimestamp="2026-01-24 00:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:05.251837918 +0000 UTC m=+2.000539773" watchObservedRunningTime="2026-01-24 00:56:05.294598681 +0000 UTC m=+2.043300536" Jan 24 00:56:05.387914 kubelet[2883]: I0124 00:56:05.387831 2883 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:56:05.390331 containerd[1609]: time="2026-01-24T00:56:05.389949547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:56:05.395525 kubelet[2883]: I0124 00:56:05.390499 2883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:56:06.182846 kubelet[2883]: E0124 00:56:06.182520 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:06.185004 kubelet[2883]: E0124 00:56:06.184622 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:06.500782 systemd[1]: Created slice kubepods-besteffort-pod9d176109_cfaa_4dbf_9ab3_cb540a54d2ae.slice - libcontainer container kubepods-besteffort-pod9d176109_cfaa_4dbf_9ab3_cb540a54d2ae.slice. 
Jan 24 00:56:06.532897 kubelet[2883]: I0124 00:56:06.532849 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d176109-cfaa-4dbf-9ab3-cb540a54d2ae-kube-proxy\") pod \"kube-proxy-lgg6b\" (UID: \"9d176109-cfaa-4dbf-9ab3-cb540a54d2ae\") " pod="kube-system/kube-proxy-lgg6b" Jan 24 00:56:06.533251 kubelet[2883]: I0124 00:56:06.533225 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d176109-cfaa-4dbf-9ab3-cb540a54d2ae-xtables-lock\") pod \"kube-proxy-lgg6b\" (UID: \"9d176109-cfaa-4dbf-9ab3-cb540a54d2ae\") " pod="kube-system/kube-proxy-lgg6b" Jan 24 00:56:06.533586 kubelet[2883]: I0124 00:56:06.533454 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d176109-cfaa-4dbf-9ab3-cb540a54d2ae-lib-modules\") pod \"kube-proxy-lgg6b\" (UID: \"9d176109-cfaa-4dbf-9ab3-cb540a54d2ae\") " pod="kube-system/kube-proxy-lgg6b" Jan 24 00:56:06.533586 kubelet[2883]: I0124 00:56:06.533493 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pc2t\" (UniqueName: \"kubernetes.io/projected/9d176109-cfaa-4dbf-9ab3-cb540a54d2ae-kube-api-access-9pc2t\") pod \"kube-proxy-lgg6b\" (UID: \"9d176109-cfaa-4dbf-9ab3-cb540a54d2ae\") " pod="kube-system/kube-proxy-lgg6b" Jan 24 00:56:06.856189 kubelet[2883]: E0124 00:56:06.855601 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:06.873289 containerd[1609]: time="2026-01-24T00:56:06.869926249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lgg6b,Uid:9d176109-cfaa-4dbf-9ab3-cb540a54d2ae,Namespace:kube-system,Attempt:0,}" Jan 
24 00:56:07.040886 containerd[1609]: time="2026-01-24T00:56:07.040661106Z" level=info msg="connecting to shim 95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915" address="unix:///run/containerd/s/a42845bfee64a9f3583f842c33273c25ac673834ce31df7612a03d1619204bea" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:07.159690 kubelet[2883]: I0124 00:56:07.158262 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/817d57db-d687-46b6-891d-88a772331bfb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r7zwk\" (UID: \"817d57db-d687-46b6-891d-88a772331bfb\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7zwk" Jan 24 00:56:07.159690 kubelet[2883]: I0124 00:56:07.158329 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsnp\" (UniqueName: \"kubernetes.io/projected/817d57db-d687-46b6-891d-88a772331bfb-kube-api-access-kvsnp\") pod \"tigera-operator-7dcd859c48-r7zwk\" (UID: \"817d57db-d687-46b6-891d-88a772331bfb\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7zwk" Jan 24 00:56:07.167304 systemd[1]: Created slice kubepods-besteffort-pod817d57db_d687_46b6_891d_88a772331bfb.slice - libcontainer container kubepods-besteffort-pod817d57db_d687_46b6_891d_88a772331bfb.slice. Jan 24 00:56:07.196645 kubelet[2883]: E0124 00:56:07.196332 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:07.294610 systemd[1]: Started cri-containerd-95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915.scope - libcontainer container 95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915. 
Jan 24 00:56:07.371000 audit: BPF prog-id=133 op=LOAD Jan 24 00:56:07.382090 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 00:56:07.382303 kernel: audit: type=1334 audit(1769216167.371:446): prog-id=133 op=LOAD Jan 24 00:56:07.373000 audit: BPF prog-id=134 op=LOAD Jan 24 00:56:07.396150 kernel: audit: type=1334 audit(1769216167.373:447): prog-id=134 op=LOAD Jan 24 00:56:07.396303 kernel: audit: type=1300 audit(1769216167.373:447): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.373000 audit[2959]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.428220 kernel: audit: type=1327 audit(1769216167.373:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.374000 audit: BPF prog-id=134 op=UNLOAD Jan 24 00:56:07.483893 kernel: audit: type=1334 audit(1769216167.374:448): prog-id=134 op=UNLOAD Jan 24 00:56:07.484210 kernel: audit: type=1300 audit(1769216167.374:448): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.374000 audit[2959]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.497229 containerd[1609]: time="2026-01-24T00:56:07.494481658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7zwk,Uid:817d57db-d687-46b6-891d-88a772331bfb,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:56:07.527833 kernel: audit: type=1327 audit(1769216167.374:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.553644 kernel: audit: type=1334 audit(1769216167.375:449): prog-id=135 op=LOAD Jan 24 00:56:07.375000 audit: BPF prog-id=135 op=LOAD Jan 24 00:56:07.375000 audit[2959]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.574871 kernel: audit: type=1300 audit(1769216167.375:449): arch=c000003e syscall=321 success=yes exit=20 
a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.376000 audit: BPF prog-id=136 op=LOAD Jan 24 00:56:07.376000 audit[2959]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.376000 audit: BPF prog-id=136 op=UNLOAD Jan 24 00:56:07.594917 kernel: audit: type=1327 audit(1769216167.375:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.376000 audit[2959]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.376000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.376000 audit: BPF prog-id=135 op=UNLOAD Jan 24 00:56:07.376000 audit[2959]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.376000 audit: BPF prog-id=137 op=LOAD Jan 24 00:56:07.376000 audit[2959]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2947 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656631323433363365383363343738363562646462333765386264 Jan 24 00:56:07.614890 containerd[1609]: time="2026-01-24T00:56:07.614692988Z" level=info msg="connecting to shim 86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb" address="unix:///run/containerd/s/8b9b3d29bc51025cd8fd5aa8f83f6e685400a4ff12a12e43a03848c69c76561e" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:07.615786 containerd[1609]: 
time="2026-01-24T00:56:07.615417739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lgg6b,Uid:9d176109-cfaa-4dbf-9ab3-cb540a54d2ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915\"" Jan 24 00:56:07.617660 kubelet[2883]: E0124 00:56:07.617633 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:07.633499 containerd[1609]: time="2026-01-24T00:56:07.633456342Z" level=info msg="CreateContainer within sandbox \"95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:56:07.697692 systemd[1]: Started cri-containerd-86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb.scope - libcontainer container 86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb. Jan 24 00:56:07.750000 audit: BPF prog-id=138 op=LOAD Jan 24 00:56:07.754000 audit: BPF prog-id=139 op=LOAD Jan 24 00:56:07.754000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.754000 audit: BPF prog-id=139 op=UNLOAD Jan 24 00:56:07.754000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.754000 audit: BPF prog-id=140 op=LOAD Jan 24 00:56:07.754000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.754000 audit: BPF prog-id=141 op=LOAD Jan 24 00:56:07.754000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.755000 audit: BPF prog-id=141 op=UNLOAD Jan 24 00:56:07.755000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.755000 audit: BPF prog-id=140 op=UNLOAD Jan 24 00:56:07.755000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.755000 audit: BPF prog-id=142 op=LOAD Jan 24 00:56:07.755000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646361396663663332363536613963316335633066643066356263 Jan 24 00:56:07.791206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210774167.mount: Deactivated successfully. 
Jan 24 00:56:07.810868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015778648.mount: Deactivated successfully. Jan 24 00:56:07.812286 containerd[1609]: time="2026-01-24T00:56:07.811307549Z" level=info msg="Container 2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:07.847555 containerd[1609]: time="2026-01-24T00:56:07.846379149Z" level=info msg="CreateContainer within sandbox \"95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50\"" Jan 24 00:56:07.847555 containerd[1609]: time="2026-01-24T00:56:07.847162300Z" level=info msg="StartContainer for \"2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50\"" Jan 24 00:56:07.858043 containerd[1609]: time="2026-01-24T00:56:07.857915507Z" level=info msg="connecting to shim 2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50" address="unix:///run/containerd/s/a42845bfee64a9f3583f842c33273c25ac673834ce31df7612a03d1619204bea" protocol=ttrpc version=3 Jan 24 00:56:07.916815 containerd[1609]: time="2026-01-24T00:56:07.916604171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7zwk,Uid:817d57db-d687-46b6-891d-88a772331bfb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb\"" Jan 24 00:56:07.923505 containerd[1609]: time="2026-01-24T00:56:07.923326111Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:56:08.022618 systemd[1]: Started cri-containerd-2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50.scope - libcontainer container 2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50. 
Jan 24 00:56:08.210000 audit: BPF prog-id=143 op=LOAD Jan 24 00:56:08.210000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:08.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313561666166616530663339333032643261396432666532356330 Jan 24 00:56:08.210000 audit: BPF prog-id=144 op=LOAD Jan 24 00:56:08.210000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:08.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313561666166616530663339333032643261396432666532356330 Jan 24 00:56:08.210000 audit: BPF prog-id=144 op=UNLOAD Jan 24 00:56:08.210000 audit[3034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:08.210000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313561666166616530663339333032643261396432666532356330 Jan 24 00:56:08.210000 audit: BPF prog-id=143 op=UNLOAD Jan 24 00:56:08.210000 audit[3034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:08.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313561666166616530663339333032643261396432666532356330 Jan 24 00:56:08.210000 audit: BPF prog-id=145 op=LOAD Jan 24 00:56:08.210000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:08.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313561666166616530663339333032643261396432666532356330 Jan 24 00:56:08.385376 containerd[1609]: time="2026-01-24T00:56:08.385317095Z" level=info msg="StartContainer for \"2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50\" returns successfully" Jan 24 00:56:09.059839 kubelet[2883]: E0124 00:56:09.058373 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:09.241268 kubelet[2883]: E0124 00:56:09.236571 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:09.241268 kubelet[2883]: E0124 00:56:09.238394 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:09.404465 kubelet[2883]: I0124 00:56:09.404368 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lgg6b" podStartSLOduration=3.404344069 podStartE2EDuration="3.404344069s" podCreationTimestamp="2026-01-24 00:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:09.404059363 +0000 UTC m=+6.152761238" watchObservedRunningTime="2026-01-24 00:56:09.404344069 +0000 UTC m=+6.153045925" Jan 24 00:56:09.819094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575165207.mount: Deactivated successfully. 
Jan 24 00:56:09.870000 audit[3105]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:09.870000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcce1aa540 a2=0 a3=7ffcce1aa52c items=0 ppid=3048 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:56:09.874000 audit[3106]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:09.874000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb72a36d0 a2=0 a3=7ffdb72a36bc items=0 ppid=3048 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:56:09.886000 audit[3112]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:09.886000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c7a5330 a2=0 a3=7ffc0c7a531c items=0 ppid=3048 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.886000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:56:09.891000 audit[3114]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:09.891000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd53bda790 a2=0 a3=7ffd53bda77c items=0 ppid=3048 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:56:09.894000 audit[3116]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:09.894000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdca31de20 a2=0 a3=7ffdca31de0c items=0 ppid=3048 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:56:09.895000 audit[3115]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:09.895000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda6162110 a2=0 a3=e03bf60af4af1494 items=0 ppid=3048 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:56:09.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:56:09.995000 audit[3117]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:09.995000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc2a11c670 a2=0 a3=7ffc2a11c65c items=0 ppid=3048 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:09.995000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:56:10.020000 audit[3119]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.020000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde5403260 a2=0 a3=7ffde540324c items=0 ppid=3048 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 24 00:56:10.057000 audit[3122]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.057000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeecdccc20 a2=0 a3=7ffeecdccc0c items=0 ppid=3048 pid=3122 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 24 00:56:10.065000 audit[3123]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.065000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b467ce0 a2=0 a3=7ffe9b467ccc items=0 ppid=3048 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:56:10.082000 audit[3125]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.082000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1f733360 a2=0 a3=7ffc1f73334c items=0 ppid=3048 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 
00:56:10.096000 audit[3126]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.096000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd96fb53c0 a2=0 a3=7ffd96fb53ac items=0 ppid=3048 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:56:10.108584 kubelet[2883]: E0124 00:56:10.108487 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:10.109000 audit[3128]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.109000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe237a7560 a2=0 a3=7ffe237a754c items=0 ppid=3048 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:56:10.148000 audit[3131]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.148000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7ffd68f139b0 a2=0 a3=7ffd68f1399c items=0 ppid=3048 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 24 00:56:10.155000 audit[3132]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.155000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd6757a00 a2=0 a3=7fffd67579ec items=0 ppid=3048 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:56:10.174000 audit[3134]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.174000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdba796080 a2=0 a3=7ffdba79606c items=0 ppid=3048 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.174000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:56:10.179000 audit[3135]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.179000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb0c18ad0 a2=0 a3=7ffdb0c18abc items=0 ppid=3048 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:56:10.190000 audit[3137]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.190000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb0fe1c40 a2=0 a3=7ffeb0fe1c2c items=0 ppid=3048 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:56:10.207000 audit[3140]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.207000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffdffa4d3e0 a2=0 a3=7ffdffa4d3cc items=0 ppid=3048 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:56:10.232000 audit[3143]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.232000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0293fe60 a2=0 a3=7fff0293fe4c items=0 ppid=3048 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:56:10.245000 audit[3144]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.245000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff87bcd530 a2=0 a3=7fff87bcd51c items=0 ppid=3048 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.245000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:56:10.250116 kubelet[2883]: E0124 00:56:10.247154 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:10.250116 kubelet[2883]: E0124 00:56:10.248276 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:10.263000 audit[3146]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.263000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd7abbe310 a2=0 a3=7ffd7abbe2fc items=0 ppid=3048 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:56:10.288000 audit[3149]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.288000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8e75b020 a2=0 a3=7ffd8e75b00c items=0 ppid=3048 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.288000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:56:10.293000 audit[3150]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.293000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc377f3f90 a2=0 a3=7ffc377f3f7c items=0 ppid=3048 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:56:10.309000 audit[3152]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:56:10.309000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe8750a750 a2=0 a3=7ffe8750a73c items=0 ppid=3048 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:56:10.458000 audit[3158]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:10.458000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffead0dc7b0 a2=0 a3=7ffead0dc79c 
items=0 ppid=3048 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:10.498000 audit[3158]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:10.498000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffead0dc7b0 a2=0 a3=7ffead0dc79c items=0 ppid=3048 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:10.505000 audit[3163]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.505000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe18154ec0 a2=0 a3=7ffe18154eac items=0 ppid=3048 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:56:10.522000 audit[3165]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.522000 audit[3165]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffd9cc88360 a2=0 a3=7ffd9cc8834c items=0 ppid=3048 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 24 00:56:10.572000 audit[3168]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.572000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffebce425f0 a2=0 a3=7ffebce425dc items=0 ppid=3048 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 24 00:56:10.578000 audit[3169]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.578000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7adc36b0 a2=0 a3=7fff7adc369c items=0 ppid=3048 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.578000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:56:10.588000 audit[3171]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.588000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5db09d90 a2=0 a3=7ffd5db09d7c items=0 ppid=3048 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:56:10.613000 audit[3172]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.613000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff836a5e70 a2=0 a3=7fff836a5e5c items=0 ppid=3048 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:56:10.677000 audit[3174]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.677000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff8c69b480 a2=0 a3=7fff8c69b46c items=0 ppid=3048 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 00:56:10.713000 audit[3177]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.713000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffd30e0370 a2=0 a3=7fffd30e035c items=0 ppid=3048 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:56:10.717000 audit[3178]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.717000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfaf823c0 a2=0 a3=7ffdfaf823ac items=0 ppid=3048 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:56:10.735000 audit[3180]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.735000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7ae0dfb0 a2=0 a3=7fff7ae0df9c items=0 ppid=3048 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:56:10.740000 audit[3181]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.740000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe33691070 a2=0 a3=7ffe3369105c items=0 ppid=3048 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.740000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:56:10.756000 audit[3183]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.756000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbecd3560 a2=0 a3=7ffcbecd354c items=0 ppid=3048 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.756000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:56:10.779000 audit[3186]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.779000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3822db00 a2=0 a3=7fff3822daec items=0 ppid=3048 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:56:10.800000 audit[3189]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.800000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5c8fbb30 a2=0 a3=7fff5c8fbb1c items=0 ppid=3048 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 24 00:56:10.804000 audit[3190]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.804000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd44b3e800 a2=0 a3=7ffd44b3e7ec items=0 ppid=3048 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:56:10.820000 audit[3192]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.820000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffce65f5f60 a2=0 a3=7ffce65f5f4c items=0 ppid=3048 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:56:10.852000 audit[3195]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.852000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb5d95400 a2=0 a3=7fffb5d953ec items=0 ppid=3048 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.852000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:56:10.857000 audit[3196]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.857000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbd26f150 a2=0 a3=7fffbd26f13c items=0 ppid=3048 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:56:10.877000 audit[3198]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.877000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc952ec4f0 a2=0 a3=7ffc952ec4dc items=0 ppid=3048 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:56:10.887000 audit[3199]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.887000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91a5dfb0 a2=0 
a3=7ffe91a5df9c items=0 ppid=3048 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:56:10.899000 audit[3201]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.899000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe2b27b3e0 a2=0 a3=7ffe2b27b3cc items=0 ppid=3048 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:56:10.919000 audit[3204]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:56:10.919000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe27a13c60 a2=0 a3=7ffe27a13c4c items=0 ppid=3048 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:56:10.939000 audit[3206]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:56:10.939000 audit[3206]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc9b1158c0 a2=0 a3=7ffc9b1158ac items=0 ppid=3048 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.939000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:10.940000 audit[3206]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:56:10.940000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc9b1158c0 a2=0 a3=7ffc9b1158ac items=0 ppid=3048 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:10.940000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:14.149278 kubelet[2883]: E0124 00:56:14.149228 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:14.276825 kubelet[2883]: E0124 00:56:14.275184 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:15.286275 kubelet[2883]: E0124 00:56:15.286214 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:17.506372 containerd[1609]: time="2026-01-24T00:56:17.506116840Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.525901 containerd[1609]: time="2026-01-24T00:56:17.525368184Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 00:56:17.544036 containerd[1609]: time="2026-01-24T00:56:17.541328413Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.547793 containerd[1609]: time="2026-01-24T00:56:17.547578906Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.549765 containerd[1609]: time="2026-01-24T00:56:17.549021220Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 9.625526466s" Jan 24 00:56:17.549765 containerd[1609]: time="2026-01-24T00:56:17.549101449Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:56:17.585770 containerd[1609]: time="2026-01-24T00:56:17.585507150Z" level=info msg="CreateContainer within sandbox \"86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:56:17.668487 containerd[1609]: time="2026-01-24T00:56:17.667248829Z" level=info msg="Container b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:17.688401 
containerd[1609]: time="2026-01-24T00:56:17.688023588Z" level=info msg="CreateContainer within sandbox \"86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6\"" Jan 24 00:56:17.689146 containerd[1609]: time="2026-01-24T00:56:17.689050885Z" level=info msg="StartContainer for \"b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6\"" Jan 24 00:56:17.691546 containerd[1609]: time="2026-01-24T00:56:17.691485720Z" level=info msg="connecting to shim b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6" address="unix:///run/containerd/s/8b9b3d29bc51025cd8fd5aa8f83f6e685400a4ff12a12e43a03848c69c76561e" protocol=ttrpc version=3 Jan 24 00:56:17.793052 systemd[1]: Started cri-containerd-b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6.scope - libcontainer container b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6. 
Jan 24 00:56:17.989885 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 24 00:56:17.990943 kernel: audit: type=1334 audit(1769216177.974:518): prog-id=146 op=LOAD Jan 24 00:56:17.974000 audit: BPF prog-id=146 op=LOAD Jan 24 00:56:18.018000 audit: BPF prog-id=147 op=LOAD Jan 24 00:56:18.018000 audit[3212]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.077597 kernel: audit: type=1334 audit(1769216178.018:519): prog-id=147 op=LOAD Jan 24 00:56:18.079071 kernel: audit: type=1300 audit(1769216178.018:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.079185 kernel: audit: type=1327 audit(1769216178.018:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.018000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:56:18.171671 kernel: audit: type=1334 audit(1769216178.018:520): prog-id=147 op=UNLOAD Jan 24 00:56:18.218222 kernel: audit: type=1300 audit(1769216178.018:520): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.018000 audit[3212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.313762 kernel: audit: type=1327 audit(1769216178.018:520): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.313945 kernel: audit: type=1334 audit(1769216178.018:521): prog-id=148 op=LOAD Jan 24 00:56:18.018000 audit: BPF prog-id=148 op=LOAD Jan 24 00:56:18.320872 kernel: audit: type=1300 audit(1769216178.018:521): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.018000 audit[3212]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:56:18.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.394454 kernel: audit: type=1327 audit(1769216178.018:521): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.018000 audit: BPF prog-id=149 op=LOAD Jan 24 00:56:18.018000 audit[3212]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.019000 audit: BPF prog-id=149 op=UNLOAD Jan 24 00:56:18.019000 audit[3212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 
Jan 24 00:56:18.019000 audit: BPF prog-id=148 op=UNLOAD Jan 24 00:56:18.019000 audit[3212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.019000 audit: BPF prog-id=150 op=LOAD Jan 24 00:56:18.019000 audit[3212]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2994 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:18.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346432396138303163633565623663336466646361323166656139 Jan 24 00:56:18.421324 containerd[1609]: time="2026-01-24T00:56:18.420161370Z" level=info msg="StartContainer for \"b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6\" returns successfully" Jan 24 00:56:28.766445 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:28.765000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:56:28.773391 sshd[1815]: Connection closed by 10.0.0.1 port 40834 Jan 24 00:56:28.774007 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 00:56:28.774089 kernel: audit: type=1106 audit(1769216188.765:526): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:56:28.774459 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:28.784228 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:56:28.786486 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:40834.service: Deactivated successfully. Jan 24 00:56:28.797528 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:56:28.799472 systemd[1]: session-8.scope: Consumed 11.247s CPU time, 213.1M memory peak. Jan 24 00:56:28.811449 systemd-logind[1585]: Removed session 8. Jan 24 00:56:28.766000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:56:28.844954 kernel: audit: type=1104 audit(1769216188.766:527): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:56:28.776000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:56:28.869945 kernel: audit: type=1106 audit(1769216188.776:528): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:56:28.776000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:56:28.911103 kernel: audit: type=1104 audit(1769216188.776:529): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:56:28.911264 kernel: audit: type=1131 audit(1769216188.786:530): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:40834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:56:28.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:40834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:56:29.710000 audit[3299]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.765857 kernel: audit: type=1325 audit(1769216189.710:531): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.766051 kernel: audit: type=1300 audit(1769216189.710:531): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8f91730 a2=0 a3=7ffdc8f9171c items=0 ppid=3048 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.710000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8f91730 a2=0 a3=7ffdc8f9171c items=0 ppid=3048 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:29.795516 kernel: audit: type=1327 audit(1769216189.710:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:29.795615 kernel: audit: type=1325 audit(1769216189.768:532): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.768000 audit[3299]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.768000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8f91730 a2=0 a3=0 items=0 ppid=3048 
pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:29.822984 kernel: audit: type=1300 audit(1769216189.768:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8f91730 a2=0 a3=0 items=0 ppid=3048 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.867000 audit[3301]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.867000 audit[3301]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcca1a4300 a2=0 a3=7ffcca1a42ec items=0 ppid=3048 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:29.877000 audit[3301]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:29.877000 audit[3301]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcca1a4300 a2=0 a3=0 items=0 ppid=3048 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:29.877000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.719901 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 00:56:35.720034 kernel: audit: type=1325 audit(1769216195.704:535): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.704000 audit[3303]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.739915 kernel: audit: type=1300 audit(1769216195.704:535): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe50361c70 a2=0 a3=7ffe50361c5c items=0 ppid=3048 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.704000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe50361c70 a2=0 a3=7ffe50361c5c items=0 ppid=3048 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.785569 kernel: audit: type=1327 audit(1769216195.704:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.773000 audit[3303]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.798941 kernel: audit: type=1325 audit(1769216195.773:536): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3303 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.799058 kernel: audit: type=1300 audit(1769216195.773:536): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe50361c70 a2=0 a3=0 items=0 ppid=3048 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.773000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe50361c70 a2=0 a3=0 items=0 ppid=3048 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.864289 kernel: audit: type=1327 audit(1769216195.773:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.901000 audit[3305]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.923026 kernel: audit: type=1325 audit(1769216195.901:537): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.901000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff84b81020 a2=0 a3=7fff84b8100c items=0 ppid=3048 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.966481 kernel: audit: type=1300 audit(1769216195.901:537): arch=c000003e syscall=46 success=yes exit=6736 a0=3 
a1=7fff84b81020 a2=0 a3=7fff84b8100c items=0 ppid=3048 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.966838 kernel: audit: type=1327 audit(1769216195.901:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:35.977000 audit[3305]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.995030 kernel: audit: type=1325 audit(1769216195.977:538): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:35.977000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff84b81020 a2=0 a3=0 items=0 ppid=3048 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:35.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:37.496000 audit[3308]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:37.496000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde2d44ff0 a2=0 a3=7ffde2d44fdc items=0 ppid=3048 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:37.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:37.502000 audit[3308]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:37.502000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde2d44ff0 a2=0 a3=0 items=0 ppid=3048 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:37.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:38.538000 audit[3311]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:38.538000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd88c5340 a2=0 a3=7ffdd88c532c items=0 ppid=3048 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:38.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:38.557000 audit[3311]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:38.557000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd88c5340 a2=0 a3=0 items=0 ppid=3048 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:38.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.458000 audit[3315]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.472849 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 24 00:56:41.472971 kernel: audit: type=1325 audit(1769216201.458:543): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.458000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd136d0340 a2=0 a3=7ffd136d032c items=0 ppid=3048 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.542987 kubelet[2883]: I0124 00:56:41.542534 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r7zwk" podStartSLOduration=25.907984988 podStartE2EDuration="35.54251383s" podCreationTimestamp="2026-01-24 00:56:06 +0000 UTC" firstStartedPulling="2026-01-24 00:56:07.921817339 +0000 UTC m=+4.670519204" lastFinishedPulling="2026-01-24 00:56:17.556346191 +0000 UTC m=+14.305048046" observedRunningTime="2026-01-24 00:56:19.482659689 +0000 UTC m=+16.231361544" watchObservedRunningTime="2026-01-24 00:56:41.54251383 +0000 UTC m=+38.291215686" Jan 24 00:56:41.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.570309 kernel: audit: type=1300 audit(1769216201.458:543): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd136d0340 a2=0 
a3=7ffd136d032c items=0 ppid=3048 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.570654 kernel: audit: type=1327 audit(1769216201.458:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.579000 audit[3315]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.606309 kernel: audit: type=1325 audit(1769216201.579:544): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.579000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd136d0340 a2=0 a3=0 items=0 ppid=3048 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.652597 kubelet[2883]: I0124 00:56:41.619381 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de264b0a-b854-4ae8-ab32-25ff23b321a2-tigera-ca-bundle\") pod \"calico-typha-b646fdd4b-fphh4\" (UID: \"de264b0a-b854-4ae8-ab32-25ff23b321a2\") " pod="calico-system/calico-typha-b646fdd4b-fphh4" Jan 24 00:56:41.652597 kubelet[2883]: I0124 00:56:41.619430 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de264b0a-b854-4ae8-ab32-25ff23b321a2-typha-certs\") pod \"calico-typha-b646fdd4b-fphh4\" (UID: \"de264b0a-b854-4ae8-ab32-25ff23b321a2\") " pod="calico-system/calico-typha-b646fdd4b-fphh4" Jan 24 00:56:41.652597 
kubelet[2883]: I0124 00:56:41.619458 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6tk\" (UniqueName: \"kubernetes.io/projected/de264b0a-b854-4ae8-ab32-25ff23b321a2-kube-api-access-2q6tk\") pod \"calico-typha-b646fdd4b-fphh4\" (UID: \"de264b0a-b854-4ae8-ab32-25ff23b321a2\") " pod="calico-system/calico-typha-b646fdd4b-fphh4" Jan 24 00:56:41.652915 kernel: audit: type=1300 audit(1769216201.579:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd136d0340 a2=0 a3=0 items=0 ppid=3048 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.611947 systemd[1]: Created slice kubepods-besteffort-podde264b0a_b854_4ae8_ab32_25ff23b321a2.slice - libcontainer container kubepods-besteffort-podde264b0a_b854_4ae8_ab32_25ff23b321a2.slice. Jan 24 00:56:41.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.676538 kernel: audit: type=1327 audit(1769216201.579:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.754000 audit[3318]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.774376 kernel: audit: type=1325 audit(1769216201.754:545): table=filter:119 family=2 entries=22 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.754000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe7162c0e0 a2=0 a3=7ffe7162c0cc items=0 ppid=3048 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.822292 kernel: audit: type=1300 audit(1769216201.754:545): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe7162c0e0 a2=0 a3=7ffe7162c0cc items=0 ppid=3048 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.846177 kernel: audit: type=1327 audit(1769216201.754:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.822000 audit[3318]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.822000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7162c0e0 a2=0 a3=0 items=0 ppid=3048 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:41.868011 kernel: audit: type=1325 audit(1769216201.822:546): table=nat:120 family=2 entries=12 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:41.942630 kubelet[2883]: E0124 00:56:41.942418 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:41.944149 containerd[1609]: 
time="2026-01-24T00:56:41.944102280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b646fdd4b-fphh4,Uid:de264b0a-b854-4ae8-ab32-25ff23b321a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:42.023622 systemd[1]: Created slice kubepods-besteffort-pod668015f1_e045_42f3_bd23_25ab61ba72e7.slice - libcontainer container kubepods-besteffort-pod668015f1_e045_42f3_bd23_25ab61ba72e7.slice. Jan 24 00:56:42.029057 kubelet[2883]: I0124 00:56:42.025502 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-cni-net-dir\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.029057 kubelet[2883]: I0124 00:56:42.025537 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-lib-modules\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.029057 kubelet[2883]: I0124 00:56:42.025561 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/668015f1-e045-42f3-bd23-25ab61ba72e7-tigera-ca-bundle\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.029057 kubelet[2883]: I0124 00:56:42.025584 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-cni-log-dir\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.029057 kubelet[2883]: I0124 
00:56:42.025607 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-policysync\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.032600 kubelet[2883]: I0124 00:56:42.025646 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-var-lib-calico\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033161 kubelet[2883]: I0124 00:56:42.033000 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-var-run-calico\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033161 kubelet[2883]: I0124 00:56:42.033043 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-xtables-lock\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033161 kubelet[2883]: I0124 00:56:42.033072 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mq5\" (UniqueName: \"kubernetes.io/projected/668015f1-e045-42f3-bd23-25ab61ba72e7-kube-api-access-89mq5\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033161 kubelet[2883]: I0124 00:56:42.033100 2883 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-cni-bin-dir\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033161 kubelet[2883]: I0124 00:56:42.033136 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/668015f1-e045-42f3-bd23-25ab61ba72e7-flexvol-driver-host\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.033360 kubelet[2883]: I0124 00:56:42.033196 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/668015f1-e045-42f3-bd23-25ab61ba72e7-node-certs\") pod \"calico-node-w4lfc\" (UID: \"668015f1-e045-42f3-bd23-25ab61ba72e7\") " pod="calico-system/calico-node-w4lfc" Jan 24 00:56:42.157055 kubelet[2883]: E0124 00:56:42.156628 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.157055 kubelet[2883]: W0124 00:56:42.156955 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.157055 kubelet[2883]: E0124 00:56:42.156996 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.161362 kubelet[2883]: E0124 00:56:42.160840 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:42.164513 containerd[1609]: time="2026-01-24T00:56:42.161210001Z" level=info msg="connecting to shim 58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f" address="unix:///run/containerd/s/63d8aeeb72471f584830a5a7cb7902319fe0089f5f94755370fd24d4af53a787" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:42.181967 kubelet[2883]: E0124 00:56:42.181930 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.182253 kubelet[2883]: W0124 00:56:42.182224 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.184344 kubelet[2883]: E0124 00:56:42.184287 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.211459 kubelet[2883]: E0124 00:56:42.211138 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.211459 kubelet[2883]: W0124 00:56:42.211216 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.211459 kubelet[2883]: E0124 00:56:42.211244 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.240124 kubelet[2883]: E0124 00:56:42.239885 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.240124 kubelet[2883]: W0124 00:56:42.239921 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.240124 kubelet[2883]: E0124 00:56:42.239952 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.242802 kubelet[2883]: E0124 00:56:42.240562 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.242802 kubelet[2883]: W0124 00:56:42.241837 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.242802 kubelet[2883]: E0124 00:56:42.241872 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.253627 kubelet[2883]: E0124 00:56:42.251963 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.253627 kubelet[2883]: W0124 00:56:42.252042 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.253627 kubelet[2883]: E0124 00:56:42.252084 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.273242 kubelet[2883]: E0124 00:56:42.273165 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.273242 kubelet[2883]: W0124 00:56:42.273199 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.273242 kubelet[2883]: E0124 00:56:42.273226 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.279954 kubelet[2883]: E0124 00:56:42.278992 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.279954 kubelet[2883]: W0124 00:56:42.279019 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.279954 kubelet[2883]: E0124 00:56:42.279041 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.280099 kubelet[2883]: E0124 00:56:42.280073 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.280099 kubelet[2883]: W0124 00:56:42.280092 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.280181 kubelet[2883]: E0124 00:56:42.280165 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.289524 kubelet[2883]: E0124 00:56:42.286426 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.289524 kubelet[2883]: W0124 00:56:42.286453 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.289524 kubelet[2883]: E0124 00:56:42.286478 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.292250 kubelet[2883]: E0124 00:56:42.291965 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.292250 kubelet[2883]: W0124 00:56:42.292046 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.292250 kubelet[2883]: E0124 00:56:42.292068 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.298217 kubelet[2883]: E0124 00:56:42.298149 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.298217 kubelet[2883]: W0124 00:56:42.298185 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.298217 kubelet[2883]: E0124 00:56:42.298206 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.298606 kubelet[2883]: E0124 00:56:42.298504 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.298606 kubelet[2883]: W0124 00:56:42.298589 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.298606 kubelet[2883]: E0124 00:56:42.298607 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.300655 kubelet[2883]: E0124 00:56:42.299490 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.304459 kubelet[2883]: W0124 00:56:42.302110 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.304459 kubelet[2883]: E0124 00:56:42.302135 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.310961 kubelet[2883]: E0124 00:56:42.310925 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.311185 kubelet[2883]: W0124 00:56:42.311128 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.311185 kubelet[2883]: E0124 00:56:42.311155 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.319269 kubelet[2883]: E0124 00:56:42.318926 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.319269 kubelet[2883]: W0124 00:56:42.318947 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.319269 kubelet[2883]: E0124 00:56:42.318969 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.329100 kubelet[2883]: E0124 00:56:42.328848 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.329100 kubelet[2883]: W0124 00:56:42.328873 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.329100 kubelet[2883]: E0124 00:56:42.328893 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.332217 kubelet[2883]: E0124 00:56:42.332070 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.332862 kubelet[2883]: W0124 00:56:42.332092 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.332862 kubelet[2883]: E0124 00:56:42.332594 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.341111 kubelet[2883]: E0124 00:56:42.341086 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.341225 kubelet[2883]: W0124 00:56:42.341206 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.341346 kubelet[2883]: E0124 00:56:42.341325 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.342588 kubelet[2883]: E0124 00:56:42.342294 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.342588 kubelet[2883]: W0124 00:56:42.342311 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.342588 kubelet[2883]: E0124 00:56:42.342327 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.349341 kubelet[2883]: E0124 00:56:42.348964 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.368076 kubelet[2883]: W0124 00:56:42.367894 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.380208 kubelet[2883]: E0124 00:56:42.369233 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.380288 systemd[1]: Started cri-containerd-58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f.scope - libcontainer container 58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f. Jan 24 00:56:42.390110 kubelet[2883]: E0124 00:56:42.390089 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.390266 kubelet[2883]: W0124 00:56:42.390245 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.390534 kubelet[2883]: E0124 00:56:42.390509 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.393968 kubelet[2883]: E0124 00:56:42.393950 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.394448 kubelet[2883]: W0124 00:56:42.394430 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.394544 kubelet[2883]: E0124 00:56:42.394525 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.394910 kubelet[2883]: E0124 00:56:42.394888 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:42.400921 kubelet[2883]: E0124 00:56:42.396380 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.400921 kubelet[2883]: W0124 00:56:42.396445 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.400921 kubelet[2883]: E0124 00:56:42.396467 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.400921 kubelet[2883]: I0124 00:56:42.396495 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6e0379d-4209-43c1-9c94-53533c368367-kubelet-dir\") pod \"csi-node-driver-rkd9m\" (UID: \"e6e0379d-4209-43c1-9c94-53533c368367\") " pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:56:42.401288 containerd[1609]: time="2026-01-24T00:56:42.401252398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4lfc,Uid:668015f1-e045-42f3-bd23-25ab61ba72e7,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:42.409926 kubelet[2883]: E0124 00:56:42.409902 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.410058 kubelet[2883]: W0124 00:56:42.410037 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.410455 kubelet[2883]: E0124 00:56:42.410434 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.412090 kubelet[2883]: I0124 00:56:42.412063 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6e0379d-4209-43c1-9c94-53533c368367-socket-dir\") pod \"csi-node-driver-rkd9m\" (UID: \"e6e0379d-4209-43c1-9c94-53533c368367\") " pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:56:42.419347 kubelet[2883]: E0124 00:56:42.419038 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.419347 kubelet[2883]: W0124 00:56:42.419198 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.419347 kubelet[2883]: E0124 00:56:42.419221 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.424559 kubelet[2883]: E0124 00:56:42.424412 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.425073 kubelet[2883]: W0124 00:56:42.424642 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.425353 kubelet[2883]: E0124 00:56:42.425007 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.437613 kubelet[2883]: E0124 00:56:42.429209 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.437613 kubelet[2883]: W0124 00:56:42.429275 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.437613 kubelet[2883]: E0124 00:56:42.429294 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.437613 kubelet[2883]: I0124 00:56:42.429624 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh58h\" (UniqueName: \"kubernetes.io/projected/e6e0379d-4209-43c1-9c94-53533c368367-kube-api-access-mh58h\") pod \"csi-node-driver-rkd9m\" (UID: \"e6e0379d-4209-43c1-9c94-53533c368367\") " pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:56:42.445890 kubelet[2883]: E0124 00:56:42.445278 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.445890 kubelet[2883]: W0124 00:56:42.445306 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.445890 kubelet[2883]: E0124 00:56:42.445331 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.448317 kubelet[2883]: E0124 00:56:42.448298 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.448523 kubelet[2883]: W0124 00:56:42.448505 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.448608 kubelet[2883]: E0124 00:56:42.448591 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.455984 kubelet[2883]: E0124 00:56:42.453389 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.455984 kubelet[2883]: W0124 00:56:42.453408 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.455984 kubelet[2883]: E0124 00:56:42.453427 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.455984 kubelet[2883]: I0124 00:56:42.453870 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e6e0379d-4209-43c1-9c94-53533c368367-varrun\") pod \"csi-node-driver-rkd9m\" (UID: \"e6e0379d-4209-43c1-9c94-53533c368367\") " pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:56:42.455984 kubelet[2883]: E0124 00:56:42.453982 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.455984 kubelet[2883]: W0124 00:56:42.453995 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.455984 kubelet[2883]: E0124 00:56:42.454010 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.459807 kubelet[2883]: E0124 00:56:42.457013 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.459807 kubelet[2883]: W0124 00:56:42.457092 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.459807 kubelet[2883]: E0124 00:56:42.457111 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.461320 kubelet[2883]: I0124 00:56:42.461232 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6e0379d-4209-43c1-9c94-53533c368367-registration-dir\") pod \"csi-node-driver-rkd9m\" (UID: \"e6e0379d-4209-43c1-9c94-53533c368367\") " pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:56:42.463623 kubelet[2883]: E0124 00:56:42.463009 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.463623 kubelet[2883]: W0124 00:56:42.463373 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.463623 kubelet[2883]: E0124 00:56:42.463394 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.484209 kubelet[2883]: E0124 00:56:42.484134 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.484209 kubelet[2883]: W0124 00:56:42.484167 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.484209 kubelet[2883]: E0124 00:56:42.484197 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.488220 kubelet[2883]: E0124 00:56:42.488173 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.488220 kubelet[2883]: W0124 00:56:42.488197 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.488220 kubelet[2883]: E0124 00:56:42.488220 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.496229 kubelet[2883]: E0124 00:56:42.496039 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.496229 kubelet[2883]: W0124 00:56:42.496125 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.496229 kubelet[2883]: E0124 00:56:42.496154 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.497818 kubelet[2883]: E0124 00:56:42.496534 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.497818 kubelet[2883]: W0124 00:56:42.496607 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.497818 kubelet[2883]: E0124 00:56:42.496625 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.550138 containerd[1609]: time="2026-01-24T00:56:42.547338644Z" level=info msg="connecting to shim 2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179" address="unix:///run/containerd/s/24c0499cea0fa9297b67d71a4cd3f76c692b5592b74a9394f539c67345f70dd4" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:42.577000 audit: BPF prog-id=151 op=LOAD Jan 24 00:56:42.578000 audit: BPF prog-id=152 op=LOAD Jan 24 00:56:42.578000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00023c238 a2=98 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.579000 audit: BPF prog-id=152 op=UNLOAD Jan 24 00:56:42.579000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.579000 audit: BPF prog-id=153 op=LOAD Jan 24 00:56:42.579000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00023c488 a2=98 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.580000 audit: BPF prog-id=154 op=LOAD Jan 24 00:56:42.580000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00023c218 a2=98 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.580000 audit: BPF prog-id=154 op=UNLOAD Jan 24 00:56:42.580000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.580000 audit: BPF prog-id=153 op=UNLOAD Jan 24 00:56:42.580000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.581000 audit: BPF prog-id=155 op=LOAD Jan 24 00:56:42.581000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00023c6e8 a2=98 a3=0 items=0 ppid=3329 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538613230383733383163303838346433623464333330336131636238 Jan 24 00:56:42.600344 kubelet[2883]: E0124 00:56:42.600313 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input Jan 24 00:56:42.600966 kubelet[2883]: W0124 00:56:42.600942 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.605140 kubelet[2883]: E0124 00:56:42.601173 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.607372 kubelet[2883]: E0124 00:56:42.607307 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.607372 kubelet[2883]: W0124 00:56:42.607328 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.607372 kubelet[2883]: E0124 00:56:42.607349 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.610323 kubelet[2883]: E0124 00:56:42.610242 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.610323 kubelet[2883]: W0124 00:56:42.610274 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.610323 kubelet[2883]: E0124 00:56:42.610299 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.612319 kubelet[2883]: E0124 00:56:42.612117 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.612319 kubelet[2883]: W0124 00:56:42.612135 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.612319 kubelet[2883]: E0124 00:56:42.612153 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.614552 kubelet[2883]: E0124 00:56:42.614533 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.614663 kubelet[2883]: W0124 00:56:42.614645 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.615003 kubelet[2883]: E0124 00:56:42.614898 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.615872 kubelet[2883]: E0124 00:56:42.615855 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.615996 kubelet[2883]: W0124 00:56:42.615953 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.615996 kubelet[2883]: E0124 00:56:42.615977 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.620041 kubelet[2883]: E0124 00:56:42.619893 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.620594 kubelet[2883]: W0124 00:56:42.620374 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.622414 kubelet[2883]: E0124 00:56:42.622279 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.637596 kubelet[2883]: E0124 00:56:42.637116 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.637596 kubelet[2883]: W0124 00:56:42.637142 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.637596 kubelet[2883]: E0124 00:56:42.637166 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.639117 kubelet[2883]: E0124 00:56:42.639100 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.639202 kubelet[2883]: W0124 00:56:42.639186 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.639283 kubelet[2883]: E0124 00:56:42.639267 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.644140 kubelet[2883]: E0124 00:56:42.644116 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.645184 kubelet[2883]: W0124 00:56:42.645010 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.645184 kubelet[2883]: E0124 00:56:42.645043 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.648837 kubelet[2883]: E0124 00:56:42.648818 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.648923 kubelet[2883]: W0124 00:56:42.648903 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.648997 kubelet[2883]: E0124 00:56:42.648982 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.655950 kubelet[2883]: E0124 00:56:42.650813 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.655950 kubelet[2883]: W0124 00:56:42.650827 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.655950 kubelet[2883]: E0124 00:56:42.650843 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.660836 kubelet[2883]: E0124 00:56:42.659285 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.660836 kubelet[2883]: W0124 00:56:42.659307 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.660836 kubelet[2883]: E0124 00:56:42.659325 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.681103 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.694983 kubelet[2883]: W0124 00:56:42.688172 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.688509 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.690166 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.694983 kubelet[2883]: W0124 00:56:42.690185 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.690206 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.692598 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.694983 kubelet[2883]: W0124 00:56:42.692833 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.694983 kubelet[2883]: E0124 00:56:42.692858 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.699815 kubelet[2883]: E0124 00:56:42.698222 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.699815 kubelet[2883]: W0124 00:56:42.698284 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.699815 kubelet[2883]: E0124 00:56:42.698304 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.703075 kubelet[2883]: E0124 00:56:42.702903 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.703075 kubelet[2883]: W0124 00:56:42.702926 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.703075 kubelet[2883]: E0124 00:56:42.702944 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.708944 kubelet[2883]: E0124 00:56:42.707364 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.708944 kubelet[2883]: W0124 00:56:42.707531 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.708944 kubelet[2883]: E0124 00:56:42.707553 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.717913 kubelet[2883]: E0124 00:56:42.717583 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.717913 kubelet[2883]: W0124 00:56:42.717604 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.717913 kubelet[2883]: E0124 00:56:42.717626 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.725137 kubelet[2883]: E0124 00:56:42.724905 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.725137 kubelet[2883]: W0124 00:56:42.724971 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.725137 kubelet[2883]: E0124 00:56:42.724996 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.753988 kubelet[2883]: E0124 00:56:42.748223 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.753988 kubelet[2883]: W0124 00:56:42.748246 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.753988 kubelet[2883]: E0124 00:56:42.748270 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.757873 kubelet[2883]: E0124 00:56:42.757565 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.757873 kubelet[2883]: W0124 00:56:42.757640 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.757873 kubelet[2883]: E0124 00:56:42.757667 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.760988 systemd[1]: Started cri-containerd-2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179.scope - libcontainer container 2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179. 
Jan 24 00:56:42.768028 kubelet[2883]: E0124 00:56:42.766656 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.768028 kubelet[2883]: W0124 00:56:42.766673 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.768028 kubelet[2883]: E0124 00:56:42.766818 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.768028 kubelet[2883]: E0124 00:56:42.767482 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.768028 kubelet[2883]: W0124 00:56:42.767495 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.768028 kubelet[2883]: E0124 00:56:42.767511 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:42.794869 kubelet[2883]: E0124 00:56:42.792122 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:42.794869 kubelet[2883]: W0124 00:56:42.792153 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:42.794869 kubelet[2883]: E0124 00:56:42.792180 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:42.828914 containerd[1609]: time="2026-01-24T00:56:42.828577494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b646fdd4b-fphh4,Uid:de264b0a-b854-4ae8-ab32-25ff23b321a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f\"" Jan 24 00:56:42.856000 audit: BPF prog-id=156 op=LOAD Jan 24 00:56:42.857000 audit: BPF prog-id=157 op=LOAD Jan 24 00:56:42.857000 audit[3429]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.857000 audit: BPF prog-id=157 op=UNLOAD Jan 24 00:56:42.857000 audit[3429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.859545 kubelet[2883]: E0124 00:56:42.857013 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:42.859612 containerd[1609]: time="2026-01-24T00:56:42.858872567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:56:42.864000 audit: BPF prog-id=158 op=LOAD Jan 24 00:56:42.864000 audit[3429]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.864000 audit: BPF prog-id=159 op=LOAD Jan 24 00:56:42.864000 audit[3429]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.864000 audit: BPF prog-id=159 op=UNLOAD Jan 24 00:56:42.864000 audit[3429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.864000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.864000 audit: BPF prog-id=158 op=UNLOAD Jan 24 00:56:42.864000 audit[3429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.864000 audit: BPF prog-id=160 op=LOAD Jan 24 00:56:42.864000 audit[3429]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3413 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313265316365313664306237323531396331623734323739613362 Jan 24 00:56:42.926000 audit[3479]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=3479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:42.926000 audit[3479]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffccc67dab0 a2=0 a3=7ffccc67da9c items=0 ppid=3048 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:42.940000 audit[3479]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=3479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:42.940000 audit[3479]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccc67dab0 a2=0 a3=0 items=0 ppid=3048 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:42.972393 containerd[1609]: time="2026-01-24T00:56:42.971512438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4lfc,Uid:668015f1-e045-42f3-bd23-25ab61ba72e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\"" Jan 24 00:56:42.974394 kubelet[2883]: E0124 00:56:42.974182 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:43.977475 kubelet[2883]: E0124 00:56:43.975792 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:44.026154 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount127810664.mount: Deactivated successfully. Jan 24 00:56:45.969087 kubelet[2883]: E0124 00:56:45.966322 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:47.976084 kubelet[2883]: E0124 00:56:47.973441 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:49.250499 containerd[1609]: time="2026-01-24T00:56:49.248990245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:49.255079 containerd[1609]: time="2026-01-24T00:56:49.255030560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 00:56:49.259951 containerd[1609]: time="2026-01-24T00:56:49.259908715Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:49.272347 containerd[1609]: time="2026-01-24T00:56:49.272274696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:49.272837 containerd[1609]: time="2026-01-24T00:56:49.272582466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 6.413665667s" Jan 24 00:56:49.272837 containerd[1609]: time="2026-01-24T00:56:49.272621418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:56:49.287907 containerd[1609]: time="2026-01-24T00:56:49.286018196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:56:49.446825 containerd[1609]: time="2026-01-24T00:56:49.446140204Z" level=info msg="CreateContainer within sandbox \"58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:56:49.548426 containerd[1609]: time="2026-01-24T00:56:49.547562981Z" level=info msg="Container d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:49.595869 containerd[1609]: time="2026-01-24T00:56:49.594230280Z" level=info msg="CreateContainer within sandbox \"58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c\"" Jan 24 00:56:49.598581 containerd[1609]: time="2026-01-24T00:56:49.598549443Z" level=info msg="StartContainer for \"d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c\"" Jan 24 00:56:49.609359 containerd[1609]: time="2026-01-24T00:56:49.609251488Z" level=info msg="connecting to shim d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c" address="unix:///run/containerd/s/63d8aeeb72471f584830a5a7cb7902319fe0089f5f94755370fd24d4af53a787" protocol=ttrpc version=3 Jan 24 
00:56:49.753638 systemd[1]: Started cri-containerd-d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c.scope - libcontainer container d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c. Jan 24 00:56:49.816604 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 24 00:56:49.816947 kernel: audit: type=1334 audit(1769216209.810:565): prog-id=161 op=LOAD Jan 24 00:56:49.810000 audit: BPF prog-id=161 op=LOAD Jan 24 00:56:49.825312 kernel: audit: type=1334 audit(1769216209.814:566): prog-id=162 op=LOAD Jan 24 00:56:49.814000 audit: BPF prog-id=162 op=LOAD Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.868253 kernel: audit: type=1300 audit(1769216209.814:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.872255 kernel: audit: type=1327 audit(1769216209.814:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: BPF prog-id=162 op=UNLOAD Jan 24 00:56:49.814000 
audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.961955 kernel: audit: type=1334 audit(1769216209.814:567): prog-id=162 op=UNLOAD Jan 24 00:56:49.962085 kernel: audit: type=1300 audit(1769216209.814:567): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.962124 kernel: audit: type=1327 audit(1769216209.814:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.986251 kubelet[2883]: E0124 00:56:49.984071 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:49.814000 audit: BPF prog-id=163 op=LOAD Jan 24 00:56:50.039199 kernel: audit: type=1334 audit(1769216209.814:568): prog-id=163 op=LOAD Jan 24 00:56:50.039348 kernel: audit: type=1300 audit(1769216209.814:568): arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:50.063136 containerd[1609]: time="2026-01-24T00:56:50.062854618Z" level=info msg="StartContainer for \"d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c\" returns successfully" Jan 24 00:56:50.067329 kernel: audit: type=1327 audit(1769216209.814:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: BPF prog-id=164 op=LOAD Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: BPF prog-id=163 op=UNLOAD Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:49.814000 audit: BPF prog-id=165 op=LOAD Jan 24 00:56:49.814000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3329 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:56:49.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623136313337346464303237306639356430306338306262646462 Jan 24 00:56:50.897927 containerd[1609]: time="2026-01-24T00:56:50.895638924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.914330 containerd[1609]: time="2026-01-24T00:56:50.914016409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:50.926164 containerd[1609]: time="2026-01-24T00:56:50.921651541Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.946069 containerd[1609]: time="2026-01-24T00:56:50.944378295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.955223 containerd[1609]: time="2026-01-24T00:56:50.955114621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.668108016s" Jan 24 00:56:50.955367 containerd[1609]: time="2026-01-24T00:56:50.955219275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference 
\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:56:51.047134 containerd[1609]: time="2026-01-24T00:56:51.043892246Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:56:51.063641 kubelet[2883]: E0124 00:56:51.063397 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:51.099618 kubelet[2883]: E0124 00:56:51.099324 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.099618 kubelet[2883]: W0124 00:56:51.099355 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.099618 kubelet[2883]: E0124 00:56:51.099381 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.100349 kubelet[2883]: E0124 00:56:51.100267 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.102901 kubelet[2883]: W0124 00:56:51.100957 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.103038 kubelet[2883]: E0124 00:56:51.103016 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.105063 kubelet[2883]: E0124 00:56:51.105041 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.105161 kubelet[2883]: W0124 00:56:51.105144 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.105256 kubelet[2883]: E0124 00:56:51.105237 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.106052 kubelet[2883]: E0124 00:56:51.106036 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.106397 kubelet[2883]: W0124 00:56:51.106130 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.106397 kubelet[2883]: E0124 00:56:51.106151 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.108116 kubelet[2883]: E0124 00:56:51.108059 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.108180 kubelet[2883]: W0124 00:56:51.108168 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.108276 kubelet[2883]: E0124 00:56:51.108258 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.109078 kubelet[2883]: E0124 00:56:51.109060 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.109347 kubelet[2883]: W0124 00:56:51.109329 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.109430 kubelet[2883]: E0124 00:56:51.109411 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.116216 kubelet[2883]: E0124 00:56:51.115979 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.116216 kubelet[2883]: W0124 00:56:51.116003 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.116216 kubelet[2883]: E0124 00:56:51.116022 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.119375 kubelet[2883]: E0124 00:56:51.119355 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.121568 kubelet[2883]: W0124 00:56:51.119816 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.121568 kubelet[2883]: E0124 00:56:51.119845 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.123877 kubelet[2883]: E0124 00:56:51.123639 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.139215 containerd[1609]: time="2026-01-24T00:56:51.129222025Z" level=info msg="Container 892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:51.134643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654142217.mount: Deactivated successfully. 
Jan 24 00:56:51.141829 kubelet[2883]: W0124 00:56:51.141651 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.142159 kubelet[2883]: E0124 00:56:51.142132 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.144458 kubelet[2883]: E0124 00:56:51.144420 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.144577 kubelet[2883]: W0124 00:56:51.144557 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.145841 kubelet[2883]: E0124 00:56:51.144647 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.152571 kubelet[2883]: E0124 00:56:51.151425 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.152936 kubelet[2883]: W0124 00:56:51.152913 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.153035 kubelet[2883]: E0124 00:56:51.153017 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.156312 kubelet[2883]: E0124 00:56:51.156294 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.156886 kubelet[2883]: W0124 00:56:51.156863 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.157346 kubelet[2883]: E0124 00:56:51.157324 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.159009 kubelet[2883]: E0124 00:56:51.158931 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.159009 kubelet[2883]: W0124 00:56:51.159006 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.168156 kubelet[2883]: E0124 00:56:51.159028 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.168156 kubelet[2883]: E0124 00:56:51.159360 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.168156 kubelet[2883]: W0124 00:56:51.159375 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.168156 kubelet[2883]: E0124 00:56:51.159388 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.174630 kubelet[2883]: E0124 00:56:51.168886 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.174630 kubelet[2883]: W0124 00:56:51.168910 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.174630 kubelet[2883]: E0124 00:56:51.168933 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.190106 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.201961 kubelet[2883]: W0124 00:56:51.190133 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.190163 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.194125 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.201961 kubelet[2883]: W0124 00:56:51.194146 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.194171 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.197399 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.201961 kubelet[2883]: W0124 00:56:51.197417 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.201961 kubelet[2883]: E0124 00:56:51.197440 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.212036 kubelet[2883]: E0124 00:56:51.211239 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.212036 kubelet[2883]: W0124 00:56:51.211271 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.212036 kubelet[2883]: E0124 00:56:51.211306 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.224555 kubelet[2883]: E0124 00:56:51.222538 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.224555 kubelet[2883]: W0124 00:56:51.222560 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.224555 kubelet[2883]: E0124 00:56:51.222583 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.233154 kubelet[2883]: E0124 00:56:51.229644 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.233154 kubelet[2883]: W0124 00:56:51.230161 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.233154 kubelet[2883]: E0124 00:56:51.230187 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.233154 kubelet[2883]: E0124 00:56:51.232641 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.235954 kubelet[2883]: W0124 00:56:51.235566 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.235954 kubelet[2883]: E0124 00:56:51.235842 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.240274 kubelet[2883]: E0124 00:56:51.240047 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.240274 kubelet[2883]: W0124 00:56:51.240067 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.240274 kubelet[2883]: E0124 00:56:51.240086 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.245962 containerd[1609]: time="2026-01-24T00:56:51.245576483Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b\"" Jan 24 00:56:51.249594 kubelet[2883]: E0124 00:56:51.249365 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.249594 kubelet[2883]: W0124 00:56:51.249390 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.249594 kubelet[2883]: E0124 00:56:51.249411 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.251429 kubelet[2883]: E0124 00:56:51.251062 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.251429 kubelet[2883]: W0124 00:56:51.251139 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.251429 kubelet[2883]: E0124 00:56:51.251159 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.254055 kubelet[2883]: E0124 00:56:51.252230 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.254055 kubelet[2883]: W0124 00:56:51.252243 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.254055 kubelet[2883]: E0124 00:56:51.252257 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.254181 containerd[1609]: time="2026-01-24T00:56:51.253861991Z" level=info msg="StartContainer for \"892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b\"" Jan 24 00:56:51.258267 kubelet[2883]: E0124 00:56:51.258245 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.258577 kubelet[2883]: W0124 00:56:51.258351 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.258577 kubelet[2883]: E0124 00:56:51.258416 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.258948 kubelet[2883]: E0124 00:56:51.258930 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.259041 kubelet[2883]: W0124 00:56:51.259024 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.259131 kubelet[2883]: E0124 00:56:51.259114 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.261269 kubelet[2883]: E0124 00:56:51.261250 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.261364 kubelet[2883]: W0124 00:56:51.261347 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.261450 kubelet[2883]: E0124 00:56:51.261432 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.264224 kubelet[2883]: E0124 00:56:51.264206 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.264322 kubelet[2883]: W0124 00:56:51.264305 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.264407 kubelet[2883]: E0124 00:56:51.264392 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.268559 containerd[1609]: time="2026-01-24T00:56:51.266978376Z" level=info msg="connecting to shim 892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b" address="unix:///run/containerd/s/24c0499cea0fa9297b67d71a4cd3f76c692b5592b74a9394f539c67345f70dd4" protocol=ttrpc version=3 Jan 24 00:56:51.273806 kubelet[2883]: E0124 00:56:51.271998 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.273806 kubelet[2883]: W0124 00:56:51.272019 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.273806 kubelet[2883]: E0124 00:56:51.272037 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.273806 kubelet[2883]: E0124 00:56:51.272612 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.273806 kubelet[2883]: W0124 00:56:51.272625 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.273806 kubelet[2883]: E0124 00:56:51.272640 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:51.280048 kubelet[2883]: E0124 00:56:51.280027 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:51.280145 kubelet[2883]: W0124 00:56:51.280129 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:51.280242 kubelet[2883]: E0124 00:56:51.280225 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.303007 kubelet[2883]: I0124 00:56:51.302158 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b646fdd4b-fphh4" podStartSLOduration=3.877942895 podStartE2EDuration="10.302141174s" podCreationTimestamp="2026-01-24 00:56:41 +0000 UTC" firstStartedPulling="2026-01-24 00:56:42.858431287 +0000 UTC m=+39.607133142" lastFinishedPulling="2026-01-24 00:56:49.282629566 +0000 UTC m=+46.031331421" observedRunningTime="2026-01-24 00:56:51.301423455 +0000 UTC m=+48.050125330" watchObservedRunningTime="2026-01-24 00:56:51.302141174 +0000 UTC m=+48.050843039" Jan 24 00:56:51.459000 audit[3590]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=3590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:51.459000 audit[3590]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd510baf40 a2=0 a3=7ffd510baf2c items=0 ppid=3048 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:51.492000 audit[3590]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=3590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:51.492000 audit[3590]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd510baf40 a2=0 a3=7ffd510baf2c items=0 ppid=3048 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.492000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:51.515049 systemd[1]: Started cri-containerd-892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b.scope - libcontainer container 892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b. Jan 24 00:56:51.863000 audit: BPF prog-id=166 op=LOAD Jan 24 00:56:51.863000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3413 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839326464646261653035326637653830626462643431383434386261 Jan 24 00:56:51.863000 audit: BPF prog-id=167 op=LOAD Jan 24 00:56:51.863000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3413 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839326464646261653035326637653830626462643431383434386261 Jan 24 00:56:51.863000 audit: BPF prog-id=167 op=UNLOAD Jan 24 00:56:51.863000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839326464646261653035326637653830626462643431383434386261 Jan 24 00:56:51.863000 audit: BPF prog-id=166 op=UNLOAD Jan 24 00:56:51.863000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839326464646261653035326637653830626462643431383434386261 Jan 24 00:56:51.863000 audit: BPF prog-id=168 op=LOAD Jan 24 00:56:51.863000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3413 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839326464646261653035326637653830626462643431383434386261 Jan 24 00:56:51.965855 kubelet[2883]: E0124 00:56:51.965597 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:52.034301 kubelet[2883]: E0124 00:56:52.034264 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:52.090238 containerd[1609]: time="2026-01-24T00:56:52.089098085Z" level=info msg="StartContainer for \"892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b\" returns successfully" Jan 24 00:56:52.117180 kubelet[2883]: E0124 00:56:52.116834 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.117180 kubelet[2883]: W0124 00:56:52.116882 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.117180 kubelet[2883]: E0124 00:56:52.116914 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:52.120879 kubelet[2883]: E0124 00:56:52.120207 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.120879 kubelet[2883]: W0124 00:56:52.120240 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.120879 kubelet[2883]: E0124 00:56:52.120267 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:52.120879 kubelet[2883]: E0124 00:56:52.120838 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.120879 kubelet[2883]: W0124 00:56:52.120852 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.120879 kubelet[2883]: E0124 00:56:52.120867 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:52.131259 kubelet[2883]: E0124 00:56:52.128963 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.131259 kubelet[2883]: W0124 00:56:52.128978 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.131259 kubelet[2883]: E0124 00:56:52.128991 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:52.132959 kubelet[2883]: E0124 00:56:52.132927 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.133329 kubelet[2883]: W0124 00:56:52.133137 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.133329 kubelet[2883]: E0124 00:56:52.133160 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:52.137943 kubelet[2883]: E0124 00:56:52.137924 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.138101 kubelet[2883]: W0124 00:56:52.138025 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.138101 kubelet[2883]: E0124 00:56:52.138043 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:52.139054 kubelet[2883]: E0124 00:56:52.138455 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:52.139054 kubelet[2883]: W0124 00:56:52.138515 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:52.139054 kubelet[2883]: E0124 00:56:52.138531 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:52.149334 systemd[1]: cri-containerd-892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b.scope: Deactivated successfully. Jan 24 00:56:52.160867 containerd[1609]: time="2026-01-24T00:56:52.159988297Z" level=info msg="received container exit event container_id:\"892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b\" id:\"892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b\" pid:3592 exited_at:{seconds:1769216212 nanos:158426652}" Jan 24 00:56:52.169000 audit: BPF prog-id=168 op=UNLOAD Jan 24 00:56:52.317597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b-rootfs.mount: Deactivated successfully. 
Jan 24 00:56:53.121544 kubelet[2883]: E0124 00:56:53.121504 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:53.175224 kubelet[2883]: E0124 00:56:53.121535 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:53.175411 containerd[1609]: time="2026-01-24T00:56:53.165955748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:56:53.972876 kubelet[2883]: E0124 00:56:53.971224 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:55.977337 kubelet[2883]: E0124 00:56:55.975960 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:58.000877 kubelet[2883]: E0124 00:56:58.000025 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:56:59.965827 kubelet[2883]: E0124 00:56:59.965477 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:01.979828 kubelet[2883]: E0124 00:57:01.979542 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:03.974007 kubelet[2883]: E0124 00:57:03.970536 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:04.494379 containerd[1609]: time="2026-01-24T00:57:04.493929444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:04.498211 containerd[1609]: time="2026-01-24T00:57:04.497970577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 00:57:04.503602 containerd[1609]: time="2026-01-24T00:57:04.503399481Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:04.519877 containerd[1609]: time="2026-01-24T00:57:04.519226017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:04.521489 containerd[1609]: time="2026-01-24T00:57:04.520956877Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 11.354950274s" Jan 24 00:57:04.521489 containerd[1609]: time="2026-01-24T00:57:04.521168750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:57:04.544579 containerd[1609]: time="2026-01-24T00:57:04.544325912Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:57:04.582617 containerd[1609]: time="2026-01-24T00:57:04.582472367Z" level=info msg="Container 692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:57:04.636999 containerd[1609]: time="2026-01-24T00:57:04.636541233Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4\"" Jan 24 00:57:04.639691 containerd[1609]: time="2026-01-24T00:57:04.639555922Z" level=info msg="StartContainer for \"692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4\"" Jan 24 00:57:04.646276 containerd[1609]: time="2026-01-24T00:57:04.645260511Z" level=info msg="connecting to shim 692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4" address="unix:///run/containerd/s/24c0499cea0fa9297b67d71a4cd3f76c692b5592b74a9394f539c67345f70dd4" protocol=ttrpc version=3 Jan 24 00:57:04.744343 systemd[1]: Started 
cri-containerd-692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4.scope - libcontainer container 692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4. Jan 24 00:57:04.909000 audit: BPF prog-id=169 op=LOAD Jan 24 00:57:04.921885 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 24 00:57:04.922008 kernel: audit: type=1334 audit(1769216224.909:581): prog-id=169 op=LOAD Jan 24 00:57:04.909000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.955685 kernel: audit: type=1300 audit(1769216224.909:581): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.970080 kernel: audit: type=1327 audit(1769216224.909:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:04.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:04.983035 kernel: audit: type=1334 audit(1769216224.909:582): prog-id=170 op=LOAD Jan 24 00:57:04.909000 audit: BPF prog-id=170 op=LOAD Jan 24 00:57:05.018464 kernel: audit: type=1300 audit(1769216224.909:582): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.909000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:05.057825 kernel: audit: type=1327 audit(1769216224.909:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:05.058548 kernel: audit: type=1334 audit(1769216224.909:583): prog-id=170 op=UNLOAD Jan 24 00:57:04.909000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:57:04.909000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.103127 kernel: audit: type=1300 audit(1769216224.909:583): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:57:05.103234 kernel: audit: type=1327 audit(1769216224.909:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:04.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:05.152132 kernel: audit: type=1334 audit(1769216224.909:584): prog-id=169 op=UNLOAD Jan 24 00:57:04.909000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:57:04.909000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:04.909000 audit: BPF prog-id=171 op=LOAD Jan 24 00:57:04.909000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3413 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.909000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639326365316136666133666530316264343036653634633433613961 Jan 24 00:57:05.182065 containerd[1609]: time="2026-01-24T00:57:05.180379805Z" level=info msg="StartContainer for \"692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4\" returns successfully" Jan 24 00:57:05.252457 kubelet[2883]: E0124 00:57:05.252179 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:05.969148 kubelet[2883]: E0124 00:57:05.966973 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:06.254326 kubelet[2883]: E0124 00:57:06.251569 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:07.173568 systemd[1]: cri-containerd-692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4.scope: Deactivated successfully. Jan 24 00:57:07.174362 systemd[1]: cri-containerd-692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4.scope: Consumed 1.530s CPU time, 177.7M memory peak, 4M read from disk, 171.3M written to disk. 
Jan 24 00:57:07.178957 containerd[1609]: time="2026-01-24T00:57:07.178047776Z" level=info msg="received container exit event container_id:\"692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4\" id:\"692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4\" pid:3666 exited_at:{seconds:1769216227 nanos:177556313}" Jan 24 00:57:07.181000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:57:07.249363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4-rootfs.mount: Deactivated successfully. Jan 24 00:57:07.254871 kubelet[2883]: I0124 00:57:07.253812 2883 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:57:07.489891 systemd[1]: Created slice kubepods-besteffort-pod92353cf3_f277_4007_be23_d786dc4f0d1c.slice - libcontainer container kubepods-besteffort-pod92353cf3_f277_4007_be23_d786dc4f0d1c.slice. Jan 24 00:57:07.507589 systemd[1]: Created slice kubepods-besteffort-pod5a9025b1_4c1d_4d71_8add_e1566c4e04cc.slice - libcontainer container kubepods-besteffort-pod5a9025b1_4c1d_4d71_8add_e1566c4e04cc.slice. Jan 24 00:57:07.533216 systemd[1]: Created slice kubepods-besteffort-pod70bde68b_f37d_4bad_bf48_1635753f011a.slice - libcontainer container kubepods-besteffort-pod70bde68b_f37d_4bad_bf48_1635753f011a.slice. Jan 24 00:57:07.555295 systemd[1]: Created slice kubepods-burstable-pod40f132ea_961b_46a8_b688_00155d8c6b06.slice - libcontainer container kubepods-burstable-pod40f132ea_961b_46a8_b688_00155d8c6b06.slice. Jan 24 00:57:07.570366 systemd[1]: Created slice kubepods-burstable-pod2bf7f94a_619e_473a_b5dc_1f8159f51234.slice - libcontainer container kubepods-burstable-pod2bf7f94a_619e_473a_b5dc_1f8159f51234.slice. 
Jan 24 00:57:07.577678 kubelet[2883]: I0124 00:57:07.576346 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9fdbb8ee-a6f4-499c-b584-8b75c3240604-goldmane-key-pair\") pod \"goldmane-666569f655-2256s\" (UID: \"9fdbb8ee-a6f4-499c-b584-8b75c3240604\") " pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:07.577678 kubelet[2883]: I0124 00:57:07.576418 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w2j\" (UniqueName: \"kubernetes.io/projected/6f07ac71-f9bf-4f16-8022-eeee9f625fbd-kube-api-access-s5w2j\") pod \"calico-apiserver-5985c58466-q852p\" (UID: \"6f07ac71-f9bf-4f16-8022-eeee9f625fbd\") " pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:07.577678 kubelet[2883]: I0124 00:57:07.576446 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjng\" (UniqueName: \"kubernetes.io/projected/40f132ea-961b-46a8-b688-00155d8c6b06-kube-api-access-sjjng\") pod \"coredns-674b8bbfcf-2psxn\" (UID: \"40f132ea-961b-46a8-b688-00155d8c6b06\") " pod="kube-system/coredns-674b8bbfcf-2psxn" Jan 24 00:57:07.577678 kubelet[2883]: I0124 00:57:07.576476 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgzw\" (UniqueName: \"kubernetes.io/projected/92353cf3-f277-4007-be23-d786dc4f0d1c-kube-api-access-lbgzw\") pod \"whisker-679b5fcd8-82p75\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:07.577678 kubelet[2883]: I0124 00:57:07.576503 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdbb8ee-a6f4-499c-b584-8b75c3240604-config\") pod \"goldmane-666569f655-2256s\" (UID: 
\"9fdbb8ee-a6f4-499c-b584-8b75c3240604\") " pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:07.578103 kubelet[2883]: I0124 00:57:07.576527 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40f132ea-961b-46a8-b688-00155d8c6b06-config-volume\") pod \"coredns-674b8bbfcf-2psxn\" (UID: \"40f132ea-961b-46a8-b688-00155d8c6b06\") " pod="kube-system/coredns-674b8bbfcf-2psxn" Jan 24 00:57:07.578103 kubelet[2883]: I0124 00:57:07.576550 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf7f94a-619e-473a-b5dc-1f8159f51234-config-volume\") pod \"coredns-674b8bbfcf-9qkfx\" (UID: \"2bf7f94a-619e-473a-b5dc-1f8159f51234\") " pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:07.578103 kubelet[2883]: I0124 00:57:07.576572 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlhw\" (UniqueName: \"kubernetes.io/projected/2bf7f94a-619e-473a-b5dc-1f8159f51234-kube-api-access-bwlhw\") pod \"coredns-674b8bbfcf-9qkfx\" (UID: \"2bf7f94a-619e-473a-b5dc-1f8159f51234\") " pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:07.578103 kubelet[2883]: I0124 00:57:07.576597 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-backend-key-pair\") pod \"whisker-679b5fcd8-82p75\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:07.578103 kubelet[2883]: I0124 00:57:07.576675 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fdbb8ee-a6f4-499c-b584-8b75c3240604-goldmane-ca-bundle\") pod 
\"goldmane-666569f655-2256s\" (UID: \"9fdbb8ee-a6f4-499c-b584-8b75c3240604\") " pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:07.578283 kubelet[2883]: I0124 00:57:07.576778 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxh5t\" (UniqueName: \"kubernetes.io/projected/9fdbb8ee-a6f4-499c-b584-8b75c3240604-kube-api-access-lxh5t\") pod \"goldmane-666569f655-2256s\" (UID: \"9fdbb8ee-a6f4-499c-b584-8b75c3240604\") " pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:07.578283 kubelet[2883]: I0124 00:57:07.576810 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86n6\" (UniqueName: \"kubernetes.io/projected/5a9025b1-4c1d-4d71-8add-e1566c4e04cc-kube-api-access-f86n6\") pod \"calico-apiserver-5985c58466-tkqwx\" (UID: \"5a9025b1-4c1d-4d71-8add-e1566c4e04cc\") " pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:07.578283 kubelet[2883]: I0124 00:57:07.576835 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-ca-bundle\") pod \"whisker-679b5fcd8-82p75\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:07.578283 kubelet[2883]: I0124 00:57:07.576868 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a9025b1-4c1d-4d71-8add-e1566c4e04cc-calico-apiserver-certs\") pod \"calico-apiserver-5985c58466-tkqwx\" (UID: \"5a9025b1-4c1d-4d71-8add-e1566c4e04cc\") " pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:07.578283 kubelet[2883]: I0124 00:57:07.576905 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/70bde68b-f37d-4bad-bf48-1635753f011a-tigera-ca-bundle\") pod \"calico-kube-controllers-5b9dc86db-sl4tg\" (UID: \"70bde68b-f37d-4bad-bf48-1635753f011a\") " pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:07.578458 kubelet[2883]: I0124 00:57:07.576929 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8624g\" (UniqueName: \"kubernetes.io/projected/70bde68b-f37d-4bad-bf48-1635753f011a-kube-api-access-8624g\") pod \"calico-kube-controllers-5b9dc86db-sl4tg\" (UID: \"70bde68b-f37d-4bad-bf48-1635753f011a\") " pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:07.578458 kubelet[2883]: I0124 00:57:07.576954 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f07ac71-f9bf-4f16-8022-eeee9f625fbd-calico-apiserver-certs\") pod \"calico-apiserver-5985c58466-q852p\" (UID: \"6f07ac71-f9bf-4f16-8022-eeee9f625fbd\") " pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:07.594034 systemd[1]: Created slice kubepods-besteffort-pod6f07ac71_f9bf_4f16_8022_eeee9f625fbd.slice - libcontainer container kubepods-besteffort-pod6f07ac71_f9bf_4f16_8022_eeee9f625fbd.slice. Jan 24 00:57:07.607992 systemd[1]: Created slice kubepods-besteffort-pod9fdbb8ee_a6f4_499c_b584_8b75c3240604.slice - libcontainer container kubepods-besteffort-pod9fdbb8ee_a6f4_499c_b584_8b75c3240604.slice. 
Jan 24 00:57:07.805658 containerd[1609]: time="2026-01-24T00:57:07.805265935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679b5fcd8-82p75,Uid:92353cf3-f277-4007-be23-d786dc4f0d1c,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:07.814162 containerd[1609]: time="2026-01-24T00:57:07.814101442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:07.849155 containerd[1609]: time="2026-01-24T00:57:07.848573737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:07.865134 kubelet[2883]: E0124 00:57:07.865040 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:07.868382 containerd[1609]: time="2026-01-24T00:57:07.867551513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2psxn,Uid:40f132ea-961b-46a8-b688-00155d8c6b06,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:07.886039 kubelet[2883]: E0124 00:57:07.885838 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:07.887665 containerd[1609]: time="2026-01-24T00:57:07.887569228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:07.902001 containerd[1609]: time="2026-01-24T00:57:07.901882618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:07.914436 containerd[1609]: 
time="2026-01-24T00:57:07.914361908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2256s,Uid:9fdbb8ee-a6f4-499c-b584-8b75c3240604,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:08.021859 systemd[1]: Created slice kubepods-besteffort-pode6e0379d_4209_43c1_9c94_53533c368367.slice - libcontainer container kubepods-besteffort-pode6e0379d_4209_43c1_9c94_53533c368367.slice. Jan 24 00:57:08.050098 containerd[1609]: time="2026-01-24T00:57:08.050009562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkd9m,Uid:e6e0379d-4209-43c1-9c94-53533c368367,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:08.271310 kubelet[2883]: E0124 00:57:08.271225 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:08.273119 containerd[1609]: time="2026-01-24T00:57:08.272358579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:57:08.319140 containerd[1609]: time="2026-01-24T00:57:08.318602129Z" level=error msg="Failed to destroy network for sandbox \"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.325448 systemd[1]: run-netns-cni\x2d6f3c58b6\x2db93e\x2db9e5\x2dca8e\x2d32d516ad82db.mount: Deactivated successfully. 
Jan 24 00:57:08.351911 containerd[1609]: time="2026-01-24T00:57:08.351849954Z" level=error msg="Failed to destroy network for sandbox \"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.360115 systemd[1]: run-netns-cni\x2df098f1ad\x2dc65e\x2da2de\x2d8893\x2d5f0fd60c69e9.mount: Deactivated successfully. Jan 24 00:57:08.372778 containerd[1609]: time="2026-01-24T00:57:08.369878075Z" level=error msg="Failed to destroy network for sandbox \"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.376061 containerd[1609]: time="2026-01-24T00:57:08.372957556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.377179 systemd[1]: run-netns-cni\x2d43262cfa\x2d2e7e\x2dd872\x2d38cd\x2d68299664b9de.mount: Deactivated successfully. 
Jan 24 00:57:08.377694 kubelet[2883]: E0124 00:57:08.377568 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.378282 kubelet[2883]: E0124 00:57:08.377788 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:08.378282 kubelet[2883]: E0124 00:57:08.377821 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:08.380699 kubelet[2883]: E0124 00:57:08.378331 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d312552c939dd59615fa2ef7ff357f0877fa1574f46605f9b395084b1d6b7f8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:57:08.385604 containerd[1609]: time="2026-01-24T00:57:08.384972456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.385855 containerd[1609]: time="2026-01-24T00:57:08.385249352Z" level=error msg="Failed to destroy network for sandbox \"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.387656 kubelet[2883]: E0124 00:57:08.387252 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.388315 kubelet[2883]: E0124 00:57:08.388220 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:08.391112 kubelet[2883]: E0124 00:57:08.388472 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:08.391360 systemd[1]: run-netns-cni\x2d76211fa8\x2db5fb\x2d3bc8\x2df06c\x2d4e9ec65eacff.mount: Deactivated successfully. Jan 24 00:57:08.393831 kubelet[2883]: E0124 00:57:08.392854 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9qkfx_kube-system(2bf7f94a-619e-473a-b5dc-1f8159f51234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9qkfx_kube-system(2bf7f94a-619e-473a-b5dc-1f8159f51234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9284fc685e3837f98fd3e7f014ca1225801697bd6737a949b4cbbcc530fa318\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9qkfx" podUID="2bf7f94a-619e-473a-b5dc-1f8159f51234" Jan 24 00:57:08.402909 containerd[1609]: time="2026-01-24T00:57:08.402834070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.404995 kubelet[2883]: E0124 00:57:08.404255 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.404995 kubelet[2883]: E0124 00:57:08.404490 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:08.404995 kubelet[2883]: E0124 00:57:08.404819 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.404995 kubelet[2883]: E0124 00:57:08.404880 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:08.405248 containerd[1609]: time="2026-01-24T00:57:08.404422082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679b5fcd8-82p75,Uid:92353cf3-f277-4007-be23-d786dc4f0d1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.405499 kubelet[2883]: E0124 00:57:08.404910 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:08.405499 kubelet[2883]: E0124 00:57:08.404974 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-679b5fcd8-82p75_calico-system(92353cf3-f277-4007-be23-d786dc4f0d1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-679b5fcd8-82p75_calico-system(92353cf3-f277-4007-be23-d786dc4f0d1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d50532a8242e79750cae9c92c5a521d5bab125e1d1997f35d6e22b2471e0ce3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-679b5fcd8-82p75" podUID="92353cf3-f277-4007-be23-d786dc4f0d1c" Jan 24 00:57:08.406326 kubelet[2883]: E0124 00:57:08.404524 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:08.408066 containerd[1609]: time="2026-01-24T00:57:08.406230628Z" level=error msg="Failed to destroy network for sandbox \"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.408456 kubelet[2883]: E0124 00:57:08.408325 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbbe3951d80e4ecdc2e33ec90dafbd56999abc00ebb89ab4e700a904cd215c50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:08.416779 containerd[1609]: time="2026-01-24T00:57:08.416281553Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-2psxn,Uid:40f132ea-961b-46a8-b688-00155d8c6b06,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.417312 kubelet[2883]: E0124 00:57:08.417269 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.417867 kubelet[2883]: E0124 00:57:08.417834 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2psxn" Jan 24 00:57:08.418041 kubelet[2883]: E0124 00:57:08.417985 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2psxn" Jan 24 00:57:08.418679 kubelet[2883]: E0124 00:57:08.418582 2883 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2psxn_kube-system(40f132ea-961b-46a8-b688-00155d8c6b06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2psxn_kube-system(40f132ea-961b-46a8-b688-00155d8c6b06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7c3ad95a7aa0a8f7b86f1daf7490674467f05690e58edfaa321d8b70ea36a14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2psxn" podUID="40f132ea-961b-46a8-b688-00155d8c6b06" Jan 24 00:57:08.453272 containerd[1609]: time="2026-01-24T00:57:08.453181803Z" level=error msg="Failed to destroy network for sandbox \"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.454800 containerd[1609]: time="2026-01-24T00:57:08.454756900Z" level=error msg="Failed to destroy network for sandbox \"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.461683 containerd[1609]: time="2026-01-24T00:57:08.461469295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2256s,Uid:9fdbb8ee-a6f4-499c-b584-8b75c3240604,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.463372 kubelet[2883]: E0124 00:57:08.462681 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.463372 kubelet[2883]: E0124 00:57:08.463105 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:08.463372 kubelet[2883]: E0124 00:57:08.463136 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2256s" Jan 24 00:57:08.463590 kubelet[2883]: E0124 00:57:08.463203 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1825a627896f8e2d6e2acad5ac0af0e9be9c38423d824bcb8cfd2e15e04c62fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:08.468084 containerd[1609]: time="2026-01-24T00:57:08.467983859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkd9m,Uid:e6e0379d-4209-43c1-9c94-53533c368367,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.468376 kubelet[2883]: E0124 00:57:08.468326 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.468949 kubelet[2883]: E0124 00:57:08.468688 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:57:08.468949 kubelet[2883]: E0124 00:57:08.468815 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rkd9m" Jan 24 00:57:08.468949 kubelet[2883]: E0124 00:57:08.468883 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b6f209e58dad4d16fe84360834073bb900d66a3c728b92a022daf301a8cf23a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:08.470125 containerd[1609]: time="2026-01-24T00:57:08.470021036Z" level=error msg="Failed to destroy network for sandbox \"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.478451 containerd[1609]: time="2026-01-24T00:57:08.478298384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.479121 kubelet[2883]: E0124 00:57:08.479070 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:08.479371 kubelet[2883]: E0124 00:57:08.479274 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:08.479371 kubelet[2883]: E0124 00:57:08.479349 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:08.479886 kubelet[2883]: E0124 00:57:08.479433 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dbcd4850f345089630540da20fbeebb0fb00aa9e2017645dbad27ede1ac2021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:09.249667 systemd[1]: run-netns-cni\x2df4569d53\x2d33a2\x2ddf5d\x2d0124\x2d14044230d5b3.mount: Deactivated successfully. Jan 24 00:57:09.249911 systemd[1]: run-netns-cni\x2d96ac3c18\x2d90f8\x2dfd3b\x2d07f5\x2d9cab7c2eaf8e.mount: Deactivated successfully. Jan 24 00:57:09.250008 systemd[1]: run-netns-cni\x2df06da66b\x2d8ac6\x2dbde1\x2dcf19\x2d6870bdf85dfd.mount: Deactivated successfully. Jan 24 00:57:09.250094 systemd[1]: run-netns-cni\x2d6f178122\x2d0c18\x2d1208\x2df225\x2d8de16974d765.mount: Deactivated successfully. Jan 24 00:57:18.968382 containerd[1609]: time="2026-01-24T00:57:18.967834348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:18.969195 containerd[1609]: time="2026-01-24T00:57:18.968226789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:19.256424 containerd[1609]: time="2026-01-24T00:57:19.254816333Z" level=error msg="Failed to destroy network for sandbox \"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.259024 systemd[1]: run-netns-cni\x2d2c5a25ff\x2d1e22\x2db71d\x2d9efe\x2db7589e28940f.mount: Deactivated successfully. 
Jan 24 00:57:19.357253 containerd[1609]: time="2026-01-24T00:57:19.355159343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.362879 kubelet[2883]: E0124 00:57:19.357849 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.362879 kubelet[2883]: E0124 00:57:19.359052 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:19.362879 kubelet[2883]: E0124 00:57:19.359092 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" Jan 24 00:57:19.364269 kubelet[2883]: E0124 00:57:19.359456 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"743019f6b397887fdd943dd7ff58b27e1a3ab21b82c1ac2506ea39fa91152878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:57:19.398136 containerd[1609]: time="2026-01-24T00:57:19.395092418Z" level=error msg="Failed to destroy network for sandbox \"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.400079 systemd[1]: run-netns-cni\x2d33a4b84d\x2ddffa\x2d78fb\x2d9b0d\x2d251524a957af.mount: Deactivated successfully. 
Jan 24 00:57:19.408045 containerd[1609]: time="2026-01-24T00:57:19.407889098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.409812 kubelet[2883]: E0124 00:57:19.408664 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.409812 kubelet[2883]: E0124 00:57:19.408866 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:19.409812 kubelet[2883]: E0124 00:57:19.408894 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" Jan 24 00:57:19.409953 kubelet[2883]: E0124 00:57:19.408952 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73a627bb60345aa9557d38b08df828aa65b9970ff62a33f9a32ad07e2baa223d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:19.968998 containerd[1609]: time="2026-01-24T00:57:19.968847630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679b5fcd8-82p75,Uid:92353cf3-f277-4007-be23-d786dc4f0d1c,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:19.997054 containerd[1609]: time="2026-01-24T00:57:19.969282530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:20.408213 containerd[1609]: time="2026-01-24T00:57:20.407933003Z" level=error msg="Failed to destroy network for sandbox \"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.412973 systemd[1]: run-netns-cni\x2d497227f3\x2dcd99\x2df99d\x2d82a0\x2deddb0f13318b.mount: Deactivated successfully. 
Jan 24 00:57:20.424909 containerd[1609]: time="2026-01-24T00:57:20.424059662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.425326 kubelet[2883]: E0124 00:57:20.425052 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.425326 kubelet[2883]: E0124 00:57:20.425309 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:20.426109 kubelet[2883]: E0124 00:57:20.425503 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5985c58466-q852p" Jan 24 00:57:20.426275 kubelet[2883]: E0124 00:57:20.426000 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ef11d2892234a1d2c94bd3fd4b87e3954cf807b9ae7153ebd97e075fde0f7ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:20.486969 containerd[1609]: time="2026-01-24T00:57:20.486449418Z" level=error msg="Failed to destroy network for sandbox \"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.493973 systemd[1]: run-netns-cni\x2dc35effd6\x2dce7c\x2da765\x2d36f5\x2d7be04ee0c226.mount: Deactivated successfully. 
Jan 24 00:57:20.500764 containerd[1609]: time="2026-01-24T00:57:20.500257570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679b5fcd8-82p75,Uid:92353cf3-f277-4007-be23-d786dc4f0d1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.501015 kubelet[2883]: E0124 00:57:20.500876 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:20.501015 kubelet[2883]: E0124 00:57:20.500945 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:20.501015 kubelet[2883]: E0124 00:57:20.500973 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-679b5fcd8-82p75" Jan 24 00:57:20.501871 kubelet[2883]: E0124 00:57:20.501547 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-679b5fcd8-82p75_calico-system(92353cf3-f277-4007-be23-d786dc4f0d1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-679b5fcd8-82p75_calico-system(92353cf3-f277-4007-be23-d786dc4f0d1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98172ea038d36495eb6e20648319e2f07a295179b48736812201db820e0b468d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-679b5fcd8-82p75" podUID="92353cf3-f277-4007-be23-d786dc4f0d1c" Jan 24 00:57:20.897295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531491907.mount: Deactivated successfully. Jan 24 00:57:20.957785 containerd[1609]: time="2026-01-24T00:57:20.957358497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.962023 containerd[1609]: time="2026-01-24T00:57:20.961977486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:57:20.967802 kubelet[2883]: E0124 00:57:20.966170 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:20.967928 containerd[1609]: time="2026-01-24T00:57:20.967023825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:20.971694 containerd[1609]: time="2026-01-24T00:57:20.969912807Z" level=info msg="ImageCreate event 
name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.987334 containerd[1609]: time="2026-01-24T00:57:20.987297726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.991906 containerd[1609]: time="2026-01-24T00:57:20.991827133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.719422579s" Jan 24 00:57:20.991906 containerd[1609]: time="2026-01-24T00:57:20.991895651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:57:21.026118 containerd[1609]: time="2026-01-24T00:57:21.025993866Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:57:21.050450 containerd[1609]: time="2026-01-24T00:57:21.050162927Z" level=info msg="Container 4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:57:21.083422 containerd[1609]: time="2026-01-24T00:57:21.083320071Z" level=info msg="CreateContainer within sandbox \"2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6\"" Jan 24 00:57:21.085475 containerd[1609]: 
time="2026-01-24T00:57:21.084970858Z" level=info msg="StartContainer for \"4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6\"" Jan 24 00:57:21.093038 containerd[1609]: time="2026-01-24T00:57:21.093000039Z" level=info msg="connecting to shim 4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6" address="unix:///run/containerd/s/24c0499cea0fa9297b67d71a4cd3f76c692b5592b74a9394f539c67345f70dd4" protocol=ttrpc version=3 Jan 24 00:57:21.104339 containerd[1609]: time="2026-01-24T00:57:21.104247080Z" level=error msg="Failed to destroy network for sandbox \"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:21.108008 systemd[1]: run-netns-cni\x2d729205be\x2dcfa1\x2d70cd\x2dca54\x2d43e9cd7d3e71.mount: Deactivated successfully. Jan 24 00:57:21.109407 containerd[1609]: time="2026-01-24T00:57:21.109293557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:21.110889 kubelet[2883]: E0124 00:57:21.109791 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 
00:57:21.110889 kubelet[2883]: E0124 00:57:21.109861 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:21.110889 kubelet[2883]: E0124 00:57:21.109893 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9qkfx" Jan 24 00:57:21.111043 kubelet[2883]: E0124 00:57:21.110004 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9qkfx_kube-system(2bf7f94a-619e-473a-b5dc-1f8159f51234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9qkfx_kube-system(2bf7f94a-619e-473a-b5dc-1f8159f51234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c1a283c744ca7aacbb5ea83b1122265e2bf8b4989f78190ec54eaa4dd001d4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9qkfx" podUID="2bf7f94a-619e-473a-b5dc-1f8159f51234" Jan 24 00:57:21.145042 systemd[1]: Started cri-containerd-4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6.scope - libcontainer container 4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6. 
Jan 24 00:57:21.298589 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 24 00:57:21.300434 kernel: audit: type=1334 audit(1769216241.283:587): prog-id=172 op=LOAD Jan 24 00:57:21.300487 kernel: audit: type=1300 audit(1769216241.283:587): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit: BPF prog-id=172 op=LOAD Jan 24 00:57:21.283000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.349791 kernel: audit: type=1327 audit(1769216241.283:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.349915 kernel: audit: type=1334 audit(1769216241.283:588): prog-id=173 op=LOAD Jan 24 00:57:21.283000 audit: BPF prog-id=173 op=LOAD Jan 24 00:57:21.354836 kernel: audit: type=1300 audit(1769216241.283:588): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.388101 kernel: audit: type=1327 audit(1769216241.283:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.388235 kernel: audit: type=1334 audit(1769216241.283:589): prog-id=173 op=UNLOAD Jan 24 00:57:21.283000 audit: BPF prog-id=173 op=UNLOAD Jan 24 00:57:21.393010 kernel: audit: type=1300 audit(1769216241.283:589): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit[4136]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.407849 kernel: audit: type=1327 audit(1769216241.283:589): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.424848 kernel: audit: type=1334 audit(1769216241.283:590): prog-id=172 op=UNLOAD Jan 24 00:57:21.283000 audit: BPF prog-id=172 op=UNLOAD Jan 24 00:57:21.283000 audit[4136]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.283000 audit: BPF prog-id=174 op=LOAD Jan 24 00:57:21.283000 audit[4136]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3413 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:21.283000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353130626139356137613761663034356265656134383338346539 Jan 24 00:57:21.461406 containerd[1609]: time="2026-01-24T00:57:21.461237802Z" level=info msg="StartContainer for \"4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6\" returns successfully" Jan 24 00:57:21.693009 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:57:21.693122 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:57:21.966915 kubelet[2883]: E0124 00:57:21.965411 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:21.967528 containerd[1609]: time="2026-01-24T00:57:21.966401794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2psxn,Uid:40f132ea-961b-46a8-b688-00155d8c6b06,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:21.978689 containerd[1609]: time="2026-01-24T00:57:21.972118852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2256s,Uid:9fdbb8ee-a6f4-499c-b584-8b75c3240604,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:22.047158 kubelet[2883]: I0124 00:57:22.047003 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgzw\" (UniqueName: \"kubernetes.io/projected/92353cf3-f277-4007-be23-d786dc4f0d1c-kube-api-access-lbgzw\") pod \"92353cf3-f277-4007-be23-d786dc4f0d1c\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " Jan 24 00:57:22.053485 kubelet[2883]: I0124 00:57:22.052223 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-ca-bundle\") pod \"92353cf3-f277-4007-be23-d786dc4f0d1c\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " Jan 24 00:57:22.053485 kubelet[2883]: I0124 00:57:22.052289 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-backend-key-pair\") pod \"92353cf3-f277-4007-be23-d786dc4f0d1c\" (UID: \"92353cf3-f277-4007-be23-d786dc4f0d1c\") " Jan 24 00:57:22.055432 kubelet[2883]: I0124 00:57:22.055395 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "92353cf3-f277-4007-be23-d786dc4f0d1c" (UID: "92353cf3-f277-4007-be23-d786dc4f0d1c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:57:22.069940 systemd[1]: var-lib-kubelet-pods-92353cf3\x2df277\x2d4007\x2dbe23\x2dd786dc4f0d1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlbgzw.mount: Deactivated successfully. Jan 24 00:57:22.077926 kubelet[2883]: I0124 00:57:22.077579 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "92353cf3-f277-4007-be23-d786dc4f0d1c" (UID: "92353cf3-f277-4007-be23-d786dc4f0d1c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:57:22.079240 kubelet[2883]: I0124 00:57:22.079122 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92353cf3-f277-4007-be23-d786dc4f0d1c-kube-api-access-lbgzw" (OuterVolumeSpecName: "kube-api-access-lbgzw") pod "92353cf3-f277-4007-be23-d786dc4f0d1c" (UID: "92353cf3-f277-4007-be23-d786dc4f0d1c"). InnerVolumeSpecName "kube-api-access-lbgzw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:57:22.080547 systemd[1]: var-lib-kubelet-pods-92353cf3\x2df277\x2d4007\x2dbe23\x2dd786dc4f0d1c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:57:22.155790 kubelet[2883]: I0124 00:57:22.154043 2883 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:22.155790 kubelet[2883]: I0124 00:57:22.154095 2883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbgzw\" (UniqueName: \"kubernetes.io/projected/92353cf3-f277-4007-be23-d786dc4f0d1c-kube-api-access-lbgzw\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:22.155790 kubelet[2883]: I0124 00:57:22.154113 2883 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92353cf3-f277-4007-be23-d786dc4f0d1c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:22.406780 kubelet[2883]: E0124 00:57:22.406274 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:22.436662 systemd[1]: Removed slice kubepods-besteffort-pod92353cf3_f277_4007_be23_d786dc4f0d1c.slice - libcontainer container 
kubepods-besteffort-pod92353cf3_f277_4007_be23_d786dc4f0d1c.slice. Jan 24 00:57:22.525491 kubelet[2883]: I0124 00:57:22.525420 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w4lfc" podStartSLOduration=3.507860771 podStartE2EDuration="41.525396347s" podCreationTimestamp="2026-01-24 00:56:41 +0000 UTC" firstStartedPulling="2026-01-24 00:56:42.977646973 +0000 UTC m=+39.726348828" lastFinishedPulling="2026-01-24 00:57:20.995182549 +0000 UTC m=+77.743884404" observedRunningTime="2026-01-24 00:57:22.478132354 +0000 UTC m=+79.226834228" watchObservedRunningTime="2026-01-24 00:57:22.525396347 +0000 UTC m=+79.274098202" Jan 24 00:57:22.867172 systemd[1]: Created slice kubepods-besteffort-podae809202_0be0_4f65_b3c1_0018455a5691.slice - libcontainer container kubepods-besteffort-podae809202_0be0_4f65_b3c1_0018455a5691.slice. Jan 24 00:57:22.976007 systemd-networkd[1507]: calia23d68f9425: Link UP Jan 24 00:57:22.977214 kubelet[2883]: I0124 00:57:22.976530 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae809202-0be0-4f65-b3c1-0018455a5691-whisker-ca-bundle\") pod \"whisker-58d88bd994-v27xr\" (UID: \"ae809202-0be0-4f65-b3c1-0018455a5691\") " pod="calico-system/whisker-58d88bd994-v27xr" Jan 24 00:57:22.977214 kubelet[2883]: I0124 00:57:22.976699 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv92s\" (UniqueName: \"kubernetes.io/projected/ae809202-0be0-4f65-b3c1-0018455a5691-kube-api-access-xv92s\") pod \"whisker-58d88bd994-v27xr\" (UID: \"ae809202-0be0-4f65-b3c1-0018455a5691\") " pod="calico-system/whisker-58d88bd994-v27xr" Jan 24 00:57:22.977214 kubelet[2883]: I0124 00:57:22.976810 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ae809202-0be0-4f65-b3c1-0018455a5691-whisker-backend-key-pair\") pod \"whisker-58d88bd994-v27xr\" (UID: \"ae809202-0be0-4f65-b3c1-0018455a5691\") " pod="calico-system/whisker-58d88bd994-v27xr" Jan 24 00:57:22.980212 systemd-networkd[1507]: calia23d68f9425: Gained carrier Jan 24 00:57:23.046870 containerd[1609]: 2026-01-24 00:57:22.162 [INFO][4190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:57:23.046870 containerd[1609]: 2026-01-24 00:57:22.190 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2psxn-eth0 coredns-674b8bbfcf- kube-system 40f132ea-961b-46a8-b688-00155d8c6b06 975 0 2026-01-24 00:56:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2psxn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia23d68f9425 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-" Jan 24 00:57:23.046870 containerd[1609]: 2026-01-24 00:57:22.191 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.046870 containerd[1609]: 2026-01-24 00:57:22.540 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" HandleID="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" 
Workload="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.542 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" HandleID="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Workload="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4520), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2psxn", "timestamp":"2026-01-24 00:57:22.540977518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.542 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.542 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.543 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.591 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" host="localhost" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.634 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.685 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.710 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.787 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.047450 containerd[1609]: 2026-01-24 00:57:22.802 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" host="localhost" Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.809 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311 Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.860 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" host="localhost" Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.909 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" host="localhost" Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.909 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" host="localhost" Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.910 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:23.047952 containerd[1609]: 2026-01-24 00:57:22.910 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" HandleID="k8s-pod-network.aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Workload="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.048125 containerd[1609]: 2026-01-24 00:57:22.924 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2psxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"40f132ea-961b-46a8-b688-00155d8c6b06", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2psxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia23d68f9425", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.048270 containerd[1609]: 2026-01-24 00:57:22.925 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.048270 containerd[1609]: 2026-01-24 00:57:22.925 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia23d68f9425 ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.048270 containerd[1609]: 2026-01-24 00:57:22.977 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.048369 containerd[1609]: 2026-01-24 00:57:22.978 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2psxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"40f132ea-961b-46a8-b688-00155d8c6b06", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311", Pod:"coredns-674b8bbfcf-2psxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia23d68f9425", MAC:"ae:dc:bb:dc:0d:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.048369 containerd[1609]: 2026-01-24 00:57:23.035 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" Namespace="kube-system" Pod="coredns-674b8bbfcf-2psxn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2psxn-eth0" Jan 24 00:57:23.125972 systemd-networkd[1507]: calid13cc950b02: Link UP Jan 24 00:57:23.126505 systemd-networkd[1507]: calid13cc950b02: Gained carrier Jan 24 00:57:23.181507 containerd[1609]: time="2026-01-24T00:57:23.181460335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d88bd994-v27xr,Uid:ae809202-0be0-4f65-b3c1-0018455a5691,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.147 [INFO][4196] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.190 [INFO][4196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--2256s-eth0 goldmane-666569f655- calico-system 9fdbb8ee-a6f4-499c-b584-8b75c3240604 976 0 2026-01-24 00:56:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-2256s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid13cc950b02 [] [] }} ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.190 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.540 [INFO][4222] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" HandleID="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Workload="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.541 [INFO][4222] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" HandleID="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Workload="localhost-k8s-goldmane--666569f655--2256s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000186760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-2256s", "timestamp":"2026-01-24 00:57:22.54094136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.542 [INFO][4222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.910 [INFO][4222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.913 [INFO][4222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.946 [INFO][4222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:22.982 [INFO][4222] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.034 [INFO][4222] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.050 [INFO][4222] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.059 [INFO][4222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.059 [INFO][4222] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.064 [INFO][4222] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.076 [INFO][4222] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.103 [INFO][4222] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.105 [INFO][4222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" host="localhost" Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.105 [INFO][4222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:23.189967 containerd[1609]: 2026-01-24 00:57:23.105 [INFO][4222] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" HandleID="k8s-pod-network.c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Workload="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.116 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2256s-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9fdbb8ee-a6f4-499c-b584-8b75c3240604", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-2256s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid13cc950b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.117 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.117 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid13cc950b02 ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.124 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.124 [INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2256s-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9fdbb8ee-a6f4-499c-b584-8b75c3240604", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a", Pod:"goldmane-666569f655-2256s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid13cc950b02", MAC:"3e:87:26:ab:9a:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.194438 containerd[1609]: 2026-01-24 00:57:23.179 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" Namespace="calico-system" Pod="goldmane-666569f655-2256s" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2256s-eth0" Jan 24 00:57:23.257425 containerd[1609]: time="2026-01-24T00:57:23.257080530Z" level=info msg="connecting to shim 
aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311" address="unix:///run/containerd/s/1877c05f6ec05768665bf8ed2ceef7e1621a9e95df3e2f068d428d19235ce71a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:23.335361 containerd[1609]: time="2026-01-24T00:57:23.335263785Z" level=info msg="connecting to shim c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a" address="unix:///run/containerd/s/d551c76e3b9a10ee6ac4283dce1079dcb94a2c8a6dbb3f9c8d77232922bb6d43" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:23.375257 systemd[1]: Started cri-containerd-aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311.scope - libcontainer container aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311. Jan 24 00:57:23.404032 systemd[1]: Started cri-containerd-c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a.scope - libcontainer container c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a. Jan 24 00:57:23.408000 audit: BPF prog-id=175 op=LOAD Jan 24 00:57:23.413928 kubelet[2883]: E0124 00:57:23.412924 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:23.414000 audit: BPF prog-id=176 op=LOAD Jan 24 00:57:23.414000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.417000 audit: BPF prog-id=176 
op=UNLOAD Jan 24 00:57:23.417000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.418000 audit: BPF prog-id=177 op=LOAD Jan 24 00:57:23.418000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.419000 audit: BPF prog-id=178 op=LOAD Jan 24 00:57:23.419000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 
24 00:57:23.419000 audit: BPF prog-id=178 op=UNLOAD Jan 24 00:57:23.419000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.419000 audit: BPF prog-id=177 op=UNLOAD Jan 24 00:57:23.419000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.419000 audit: BPF prog-id=179 op=LOAD Jan 24 00:57:23.419000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4307 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.419000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161333961323139623831313065346333653735363031343334373733 Jan 24 00:57:23.426062 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:23.475000 audit: BPF prog-id=180 op=LOAD Jan 24 00:57:23.478000 audit: BPF prog-id=181 op=LOAD Jan 24 00:57:23.478000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.478000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:57:23.478000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.479000 audit: BPF prog-id=182 op=LOAD Jan 24 00:57:23.479000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 
items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.479000 audit: BPF prog-id=183 op=LOAD Jan 24 00:57:23.479000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.480000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:57:23.480000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.480000 audit: BPF prog-id=182 op=UNLOAD Jan 24 00:57:23.480000 audit[4355]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.480000 audit: BPF prog-id=184 op=LOAD Jan 24 00:57:23.480000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4339 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663761303238333730666439366162623566616361363333656535 Jan 24 00:57:23.484652 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:23.579777 containerd[1609]: time="2026-01-24T00:57:23.579176473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2psxn,Uid:40f132ea-961b-46a8-b688-00155d8c6b06,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311\"" Jan 24 00:57:23.589858 kubelet[2883]: E0124 00:57:23.588798 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:23.610818 systemd-networkd[1507]: 
cali3724a947b42: Link UP Jan 24 00:57:23.614350 systemd-networkd[1507]: cali3724a947b42: Gained carrier Jan 24 00:57:23.615398 containerd[1609]: time="2026-01-24T00:57:23.615175799Z" level=info msg="CreateContainer within sandbox \"aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.305 [INFO][4291] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.354 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--58d88bd994--v27xr-eth0 whisker-58d88bd994- calico-system ae809202-0be0-4f65-b3c1-0018455a5691 1065 0 2026-01-24 00:57:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58d88bd994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-58d88bd994-v27xr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3724a947b42 [] [] }} ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.354 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.435 [INFO][4362] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" 
HandleID="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Workload="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.435 [INFO][4362] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" HandleID="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Workload="localhost-k8s-whisker--58d88bd994--v27xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-58d88bd994-v27xr", "timestamp":"2026-01-24 00:57:23.435054911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.436 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.437 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.437 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.459 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.480 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.494 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.501 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.513 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.513 [INFO][4362] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.521 [INFO][4362] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.542 [INFO][4362] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.571 [INFO][4362] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.571 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" host="localhost" Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.571 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:23.675847 containerd[1609]: 2026-01-24 00:57:23.571 [INFO][4362] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" HandleID="k8s-pod-network.8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Workload="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.603 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58d88bd994--v27xr-eth0", GenerateName:"whisker-58d88bd994-", Namespace:"calico-system", SelfLink:"", UID:"ae809202-0be0-4f65-b3c1-0018455a5691", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58d88bd994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-58d88bd994-v27xr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3724a947b42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.603 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.603 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3724a947b42 ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.612 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.612 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" 
WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58d88bd994--v27xr-eth0", GenerateName:"whisker-58d88bd994-", Namespace:"calico-system", SelfLink:"", UID:"ae809202-0be0-4f65-b3c1-0018455a5691", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58d88bd994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd", Pod:"whisker-58d88bd994-v27xr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3724a947b42", MAC:"8a:a8:27:30:79:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:23.687033 containerd[1609]: 2026-01-24 00:57:23.669 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" Namespace="calico-system" Pod="whisker-58d88bd994-v27xr" WorkloadEndpoint="localhost-k8s-whisker--58d88bd994--v27xr-eth0" Jan 24 00:57:23.687033 containerd[1609]: time="2026-01-24T00:57:23.680100464Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-2256s,Uid:9fdbb8ee-a6f4-499c-b584-8b75c3240604,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a\"" Jan 24 00:57:23.691024 containerd[1609]: time="2026-01-24T00:57:23.687919213Z" level=info msg="Container 346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:57:23.720765 containerd[1609]: time="2026-01-24T00:57:23.720557637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:23.724344 containerd[1609]: time="2026-01-24T00:57:23.724229125Z" level=info msg="CreateContainer within sandbox \"aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050\"" Jan 24 00:57:23.734779 containerd[1609]: time="2026-01-24T00:57:23.732961656Z" level=info msg="StartContainer for \"346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050\"" Jan 24 00:57:23.740668 containerd[1609]: time="2026-01-24T00:57:23.740520232Z" level=info msg="connecting to shim 346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050" address="unix:///run/containerd/s/1877c05f6ec05768665bf8ed2ceef7e1621a9e95df3e2f068d428d19235ce71a" protocol=ttrpc version=3 Jan 24 00:57:23.806483 containerd[1609]: time="2026-01-24T00:57:23.805574900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:23.813388 systemd[1]: Started cri-containerd-346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050.scope - libcontainer container 346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050. 
Jan 24 00:57:23.825592 containerd[1609]: time="2026-01-24T00:57:23.825473696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:23.826276 containerd[1609]: time="2026-01-24T00:57:23.826115371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:23.827037 kubelet[2883]: E0124 00:57:23.826863 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:23.827146 kubelet[2883]: E0124 00:57:23.827044 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:23.827545 kubelet[2883]: E0124 00:57:23.827451 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxh5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:23.829377 kubelet[2883]: E0124 00:57:23.829138 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:23.875099 containerd[1609]: time="2026-01-24T00:57:23.874818798Z" level=info msg="connecting to shim 8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd" address="unix:///run/containerd/s/52a17e24fc3238891abf746c28dcfdf13330046490f3e6d4d8cbed95c3a0709e" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:23.900000 audit: BPF prog-id=185 op=LOAD 
Jan 24 00:57:23.902000 audit: BPF prog-id=186 op=LOAD Jan 24 00:57:23.902000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.902000 audit: BPF prog-id=186 op=UNLOAD Jan 24 00:57:23.902000 audit[4528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.908000 audit: BPF prog-id=187 op=LOAD Jan 24 00:57:23.908000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.908000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.909000 audit: BPF prog-id=188 op=LOAD Jan 24 00:57:23.909000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.909000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:57:23.909000 audit[4528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.909000 audit: BPF prog-id=187 op=UNLOAD Jan 24 00:57:23.909000 audit[4528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:57:23.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:23.909000 audit: BPF prog-id=189 op=LOAD Jan 24 00:57:23.909000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4307 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:23.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334366565363461363666653564336538316539393637303030613637 Jan 24 00:57:24.046384 kubelet[2883]: I0124 00:57:24.046207 2883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92353cf3-f277-4007-be23-d786dc4f0d1c" path="/var/lib/kubelet/pods/92353cf3-f277-4007-be23-d786dc4f0d1c/volumes" Jan 24 00:57:24.060108 containerd[1609]: time="2026-01-24T00:57:24.056552982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkd9m,Uid:e6e0379d-4209-43c1-9c94-53533c368367,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:24.101239 systemd[1]: Started cri-containerd-8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd.scope - libcontainer container 8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd. 
Jan 24 00:57:24.203779 containerd[1609]: time="2026-01-24T00:57:24.203532164Z" level=info msg="StartContainer for \"346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050\" returns successfully" Jan 24 00:57:24.233000 audit: BPF prog-id=190 op=LOAD Jan 24 00:57:24.244000 audit: BPF prog-id=191 op=LOAD Jan 24 00:57:24.244000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.263000 audit: BPF prog-id=191 op=UNLOAD Jan 24 00:57:24.263000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.264000 audit: BPF prog-id=192 op=LOAD Jan 24 00:57:24.264000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:57:24.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.266000 audit: BPF prog-id=193 op=LOAD Jan 24 00:57:24.266000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.266000 audit: BPF prog-id=193 op=UNLOAD Jan 24 00:57:24.266000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.266000 audit: BPF prog-id=192 op=UNLOAD Jan 24 00:57:24.266000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.268000 audit: BPF prog-id=194 op=LOAD Jan 24 00:57:24.268000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=4559 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303036313164386561653061336664353964623364323739336338 Jan 24 00:57:24.276067 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:24.458813 kubelet[2883]: E0124 00:57:24.457526 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:24.472831 kubelet[2883]: E0124 00:57:24.472686 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
00:57:24.481352 systemd-networkd[1507]: calid13cc950b02: Gained IPv6LL Jan 24 00:57:24.496550 containerd[1609]: time="2026-01-24T00:57:24.496497024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d88bd994-v27xr,Uid:ae809202-0be0-4f65-b3c1-0018455a5691,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd\"" Jan 24 00:57:24.529183 containerd[1609]: time="2026-01-24T00:57:24.528200404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:24.575109 kubelet[2883]: I0124 00:57:24.574911 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2psxn" podStartSLOduration=78.574889798 podStartE2EDuration="1m18.574889798s" podCreationTimestamp="2026-01-24 00:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:24.570472895 +0000 UTC m=+81.319174750" watchObservedRunningTime="2026-01-24 00:57:24.574889798 +0000 UTC m=+81.323591653" Jan 24 00:57:24.588000 audit[4641]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:24.588000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe79d29390 a2=0 a3=7ffe79d2937c items=0 ppid=3048 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:24.604000 audit[4641]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:24.604000 
audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe79d29390 a2=0 a3=0 items=0 ppid=3048 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:24.624015 containerd[1609]: time="2026-01-24T00:57:24.623839498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:24.625771 containerd[1609]: time="2026-01-24T00:57:24.625525809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:24.626053 containerd[1609]: time="2026-01-24T00:57:24.625794930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:24.626886 kubelet[2883]: E0124 00:57:24.626502 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:24.626886 kubelet[2883]: E0124 00:57:24.626856 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:24.627962 kubelet[2883]: E0124 00:57:24.627582 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3b1510d29974db1a1191d4e38011034,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:24.651060 containerd[1609]: time="2026-01-24T00:57:24.650662294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:24.674801 systemd-networkd[1507]: calia23d68f9425: 
Gained IPv6LL Jan 24 00:57:24.689000 audit[4656]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:24.689000 audit[4656]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb9c14a90 a2=0 a3=7ffcb9c14a7c items=0 ppid=3048 pid=4656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:24.699000 audit[4656]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:24.699000 audit[4656]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb9c14a90 a2=0 a3=0 items=0 ppid=3048 pid=4656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:24.794528 containerd[1609]: time="2026-01-24T00:57:24.793841613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:24.809887 containerd[1609]: time="2026-01-24T00:57:24.809694504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:24.809887 containerd[1609]: time="2026-01-24T00:57:24.809877485Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:24.810297 kubelet[2883]: E0124 00:57:24.810248 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:24.811050 kubelet[2883]: E0124 00:57:24.810947 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:24.812028 kubelet[2883]: E0124 00:57:24.811958 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secre
ts/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:24.813544 kubelet[2883]: E0124 00:57:24.813475 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:57:24.916000 audit: BPF prog-id=195 op=LOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=5 a1=7ffdf4a07420 a2=98 a3=1fffffffffffffff items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.916000 audit: BPF prog-id=195 op=UNLOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdf4a073f0 a3=0 items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.916000 audit: BPF prog-id=196 op=LOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4a07300 a2=94 a3=3 items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.916000 audit: BPF 
prog-id=196 op=UNLOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdf4a07300 a2=94 a3=3 items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.916000 audit: BPF prog-id=197 op=LOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4a07340 a2=94 a3=7ffdf4a07520 items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.916000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:57:24.916000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdf4a07340 a2=94 a3=7ffdf4a07520 items=0 ppid=4429 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.916000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:57:24.923000 audit: BPF prog-id=198 op=LOAD Jan 24 00:57:24.923000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedd431150 a2=98 a3=3 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.923000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:24.923000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:57:24.923000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffedd431120 a3=0 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.923000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:24.927000 audit: BPF prog-id=199 op=LOAD Jan 24 00:57:24.927000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedd430f40 a2=94 a3=54428f items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:24.927000 audit: BPF prog-id=199 op=UNLOAD Jan 24 00:57:24.927000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedd430f40 a2=94 a3=54428f items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:24.927000 audit: BPF prog-id=200 op=LOAD Jan 24 00:57:24.927000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedd430f70 a2=94 a3=2 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:24.927000 audit: BPF prog-id=200 op=UNLOAD Jan 24 00:57:24.927000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedd430f70 a2=0 a3=2 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:24.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.122506 systemd-networkd[1507]: calibfa591704ec: Link UP Jan 24 00:57:25.124458 systemd-networkd[1507]: calibfa591704ec: Gained carrier Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.453 [INFO][4590] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.558 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rkd9m-eth0 csi-node-driver- calico-system e6e0379d-4209-43c1-9c94-53533c368367 825 0 2026-01-24 00:56:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rkd9m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibfa591704ec [] [] }} ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.558 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.818 [INFO][4648] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" HandleID="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Workload="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.820 [INFO][4648] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" HandleID="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Workload="localhost-k8s-csi--node--driver--rkd9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rkd9m", "timestamp":"2026-01-24 00:57:24.818328653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:25.193212 containerd[1609]: 
2026-01-24 00:57:24.820 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.821 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.821 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.857 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.903 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.973 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:24.989 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.009 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.009 [INFO][4648] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.023 [INFO][4648] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.050 [INFO][4648] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" 
host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.090 [INFO][4648] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.090 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" host="localhost" Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.090 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:25.193212 containerd[1609]: 2026-01-24 00:57:25.090 [INFO][4648] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" HandleID="k8s-pod-network.15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Workload="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.103 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rkd9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6e0379d-4209-43c1-9c94-53533c368367", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rkd9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa591704ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.103 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.103 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfa591704ec ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.129 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.129 
[INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rkd9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6e0379d-4209-43c1-9c94-53533c368367", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b", Pod:"csi-node-driver-rkd9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa591704ec", MAC:"8e:7f:ec:d0:b8:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:25.199775 containerd[1609]: 2026-01-24 00:57:25.172 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" Namespace="calico-system" Pod="csi-node-driver-rkd9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--rkd9m-eth0" Jan 24 00:57:25.372782 containerd[1609]: time="2026-01-24T00:57:25.371544609Z" level=info msg="connecting to shim 15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b" address="unix:///run/containerd/s/50e22b837a69bddcc20a31cbca99845dc60ae76dd4a702a826df894f610b85f7" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:25.500373 kubelet[2883]: E0124 00:57:25.500247 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:25.506039 kubelet[2883]: E0124 00:57:25.504033 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:25.506039 kubelet[2883]: E0124 00:57:25.505287 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:57:25.532140 systemd-networkd[1507]: cali3724a947b42: Gained IPv6LL Jan 24 00:57:25.552065 systemd[1]: Started cri-containerd-15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b.scope - libcontainer container 15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b. Jan 24 00:57:25.582000 audit: BPF prog-id=201 op=LOAD Jan 24 00:57:25.582000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedd430e30 a2=94 a3=1 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.582000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.582000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:57:25.582000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedd430e30 a2=94 a3=1 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.582000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.615000 audit: BPF prog-id=202 op=LOAD Jan 24 00:57:25.615000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedd430e20 a2=94 a3=4 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
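The `proctitle=` values in the audit records above are process command lines, hex-encoded with NUL bytes separating the arguments. A minimal decoder (an illustrative helper, not part of the log or of any audit tooling):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, NUL-separated argv."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# The short bpftool records in this log decode to:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# -> bpftool map list --json
```

Applied to the longer records here, the `iptables-restor` entries decode to `iptables-restore -w 5 -W 100000 --noflush --counters`, and the first bpftool entries decode to a `bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array ...` invocation, which is Calico's BPF dataplane setting up its program-array maps.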
Jan 24 00:57:25.616000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:57:25.616000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffedd430e20 a2=0 a3=4 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.616000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.617000 audit: BPF prog-id=203 op=LOAD Jan 24 00:57:25.617000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffedd430c80 a2=94 a3=5 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.617000 audit: BPF prog-id=203 op=UNLOAD Jan 24 00:57:25.617000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffedd430c80 a2=0 a3=5 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.617000 audit: BPF prog-id=204 op=LOAD Jan 24 00:57:25.617000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedd430ea0 a2=94 a3=6 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.617000 audit: BPF prog-id=204 op=UNLOAD Jan 24 
00:57:25.617000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffedd430ea0 a2=0 a3=6 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.619000 audit: BPF prog-id=205 op=LOAD Jan 24 00:57:25.619000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedd430650 a2=94 a3=88 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.619000 audit: BPF prog-id=206 op=LOAD Jan 24 00:57:25.619000 audit[4676]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffedd4304d0 a2=94 a3=2 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.619000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:57:25.619000 audit[4676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffedd430500 a2=0 a3=7ffedd430600 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.620000 audit: BPF prog-id=205 op=UNLOAD Jan 24 00:57:25.620000 audit[4676]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=5 a1=2fca5d10 a2=0 a3=65efc62258d2e712 items=0 ppid=4429 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.620000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:57:25.688000 audit: BPF prog-id=207 op=LOAD Jan 24 00:57:25.694000 audit: BPF prog-id=208 op=LOAD Jan 24 00:57:25.694000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.699000 audit: BPF prog-id=208 op=UNLOAD Jan 24 00:57:25.699000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.708000 audit: BPF prog-id=209 op=LOAD Jan 24 00:57:25.708000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.711000 audit: BPF prog-id=210 op=LOAD Jan 24 00:57:25.711000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.712000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:57:25.712000 audit: BPF prog-id=211 op=LOAD Jan 24 00:57:25.712000 audit[4732]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe988a3750 a2=98 a3=1999999999999999 items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.712000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:57:25.712000 
audit[4732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe988a3720 a3=0 items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.712000 audit: BPF prog-id=212 op=LOAD Jan 24 00:57:25.712000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.712000 audit[4732]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe988a3630 a2=94 a3=ffff items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.712000 audit: BPF prog-id=212 op=UNLOAD Jan 24 
00:57:25.712000 audit[4732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe988a3630 a2=94 a3=ffff items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.712000 audit: BPF prog-id=213 op=LOAD Jan 24 00:57:25.712000 audit[4732]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe988a3670 a2=94 a3=7ffe988a3850 items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.714000 audit: BPF prog-id=213 op=UNLOAD Jan 24 00:57:25.714000 audit[4732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe988a3670 a2=94 a3=7ffe988a3850 items=0 ppid=4429 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.714000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:57:25.715000 audit: BPF prog-id=209 op=UNLOAD Jan 24 00:57:25.715000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.720000 audit: BPF prog-id=214 op=LOAD Jan 24 00:57:25.720000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4695 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135363630653731303362383830633666306236396662623861333432 Jan 24 00:57:25.723000 audit[4731]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:25.723000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf0d3e6f0 a2=0 a3=7ffcf0d3e6dc items=0 ppid=3048 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:25.729451 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:25.750000 audit[4731]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:25.750000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf0d3e6f0 a2=0 a3=0 items=0 ppid=3048 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:25.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:25.874318 containerd[1609]: time="2026-01-24T00:57:25.874152793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkd9m,Uid:e6e0379d-4209-43c1-9c94-53533c368367,Namespace:calico-system,Attempt:0,} returns sandbox id \"15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b\"" Jan 24 00:57:25.902204 containerd[1609]: time="2026-01-24T00:57:25.902117918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:26.000784 containerd[1609]: time="2026-01-24T00:57:26.000100818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:26.005512 containerd[1609]: time="2026-01-24T00:57:26.004700735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:26.005512 containerd[1609]: 
time="2026-01-24T00:57:26.004883465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:26.009994 kubelet[2883]: E0124 00:57:26.009843 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:26.009994 kubelet[2883]: E0124 00:57:26.009945 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:26.010507 kubelet[2883]: E0124 00:57:26.010116 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:57:26.013662 containerd[1609]: time="2026-01-24T00:57:26.013574844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:26.080800 systemd-networkd[1507]: vxlan.calico: Link UP Jan 24 00:57:26.080815 systemd-networkd[1507]: vxlan.calico: Gained carrier Jan 24 00:57:26.104845 containerd[1609]: time="2026-01-24T00:57:26.102428163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:26.115559 containerd[1609]: time="2026-01-24T00:57:26.113966871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:26.115825 containerd[1609]: time="2026-01-24T00:57:26.114056459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:26.116411 kubelet[2883]: E0124 00:57:26.116362 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:26.116553 kubelet[2883]: E0124 00:57:26.116523 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:26.117322 kubelet[2883]: E0124 00:57:26.117266 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:26.120005 kubelet[2883]: E0124 00:57:26.119924 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:26.195000 audit: BPF prog-id=215 op=LOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8450f040 a2=98 a3=0 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.195000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8450f010 a3=0 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:57:26.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.195000 audit: BPF prog-id=216 op=LOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8450ee50 a2=94 a3=54428f items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.195000 audit: BPF prog-id=216 op=UNLOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8450ee50 a2=94 a3=54428f items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.195000 audit: BPF prog-id=217 op=LOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8450ee80 a2=94 a3=2 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.195000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.195000 audit: BPF prog-id=217 op=UNLOAD Jan 24 00:57:26.195000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8450ee80 a2=0 a3=2 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.196000 audit: BPF prog-id=218 op=LOAD Jan 24 00:57:26.196000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8450ec30 a2=94 a3=4 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.196000 audit: BPF prog-id=218 op=UNLOAD Jan 24 00:57:26.196000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8450ec30 a2=94 a3=4 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.196000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.196000 audit: BPF prog-id=219 op=LOAD Jan 24 00:57:26.196000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8450ed30 a2=94 a3=7fff8450eeb0 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.196000 audit: BPF prog-id=219 op=UNLOAD Jan 24 00:57:26.196000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8450ed30 a2=0 a3=7fff8450eeb0 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.197000 audit: BPF prog-id=220 op=LOAD Jan 24 00:57:26.197000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8450e460 a2=94 a3=2 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.197000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.197000 audit: BPF prog-id=220 op=UNLOAD Jan 24 00:57:26.197000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8450e460 a2=0 a3=2 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.197000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.197000 audit: BPF prog-id=221 op=LOAD Jan 24 00:57:26.197000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8450e560 a2=94 a3=30 items=0 ppid=4429 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.197000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:57:26.222000 audit: BPF prog-id=222 op=LOAD Jan 24 00:57:26.222000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd6055aea0 a2=98 a3=0 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.222000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.222000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:57:26.222000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd6055ae70 a3=0 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.222000 audit: BPF prog-id=223 op=LOAD Jan 24 00:57:26.222000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6055ac90 a2=94 a3=54428f items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.223000 audit: BPF prog-id=223 op=UNLOAD Jan 24 00:57:26.223000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd6055ac90 a2=94 a3=54428f items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.223000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.223000 audit: BPF prog-id=224 op=LOAD Jan 24 00:57:26.223000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6055acc0 a2=94 a3=2 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.223000 audit: BPF prog-id=224 op=UNLOAD Jan 24 00:57:26.223000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd6055acc0 a2=0 a3=2 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.346885 systemd-networkd[1507]: calibfa591704ec: Gained IPv6LL Jan 24 00:57:26.513837 kubelet[2883]: E0124 00:57:26.512517 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:26.517450 kubelet[2883]: E0124 00:57:26.517225 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:26.518676 kubelet[2883]: E0124 00:57:26.518578 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:57:26.825000 audit[4776]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:26.852405 kernel: kauditd_printk_skb: 280 callbacks suppressed Jan 24 00:57:26.852595 kernel: audit: type=1325 audit(1769216246.825:687): table=filter:131 family=2 entries=17 
op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:26.825000 audit[4776]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffceab97160 a2=0 a3=7ffceab9714c items=0 ppid=3048 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.896950 kernel: audit: type=1300 audit(1769216246.825:687): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffceab97160 a2=0 a3=7ffceab9714c items=0 ppid=3048 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.897109 kernel: audit: type=1327 audit(1769216246.825:687): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:26.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:26.906196 kernel: audit: type=1334 audit(1769216246.886:688): prog-id=225 op=LOAD Jan 24 00:57:26.886000 audit: BPF prog-id=225 op=LOAD Jan 24 00:57:26.886000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6055ab80 a2=94 a3=1 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.941413 kernel: audit: type=1300 audit(1769216246.886:688): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6055ab80 a2=94 a3=1 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.941560 kernel: audit: type=1327 audit(1769216246.886:688): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.886000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.967763 kernel: audit: type=1334 audit(1769216246.886:689): prog-id=225 op=UNLOAD Jan 24 00:57:26.886000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:57:26.886000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd6055ab80 a2=94 a3=1 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.998802 kernel: audit: type=1300 audit(1769216246.886:689): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd6055ab80 a2=94 a3=1 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.998934 kernel: audit: type=1327 audit(1769216246.886:689): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.886000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:27.021316 kernel: audit: type=1325 audit(1769216246.905:690): table=nat:132 family=2 entries=35 
op=nft_register_chain pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:26.905000 audit[4776]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:26.905000 audit[4776]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffceab97160 a2=0 a3=7ffceab9714c items=0 ppid=3048 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:26.916000 audit: BPF prog-id=226 op=LOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd6055ab70 a2=94 a3=4 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd6055ab70 a2=0 a3=4 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=227 op=LOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd6055a9d0 a2=94 a3=5 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=227 op=UNLOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd6055a9d0 a2=0 a3=5 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=228 op=LOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd6055abf0 a2=94 a3=6 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=228 op=UNLOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd6055abf0 a2=0 a3=6 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=229 op=LOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd6055a3a0 a2=94 a3=88 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.916000 audit: BPF prog-id=230 op=LOAD Jan 24 00:57:26.916000 audit[4774]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd6055a220 a2=94 a3=2 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.916000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.921000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:57:26.921000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd6055a250 a2=0 a3=7ffd6055a350 items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:26.921000 audit: BPF prog-id=229 op=UNLOAD Jan 24 00:57:26.921000 audit[4774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2aa38d10 a2=0 a3=e24ba00ffef95d2a items=0 ppid=4429 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:26.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:27.046000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:57:27.046000 audit[4429]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0012c8140 a2=0 a3=0 items=0 ppid=4415 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:27.046000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:57:27.364559 systemd-networkd[1507]: vxlan.calico: Gained 
IPv6LL Jan 24 00:57:27.410000 audit[4805]: NETFILTER_CFG table=mangle:133 family=2 entries=16 op=nft_register_chain pid=4805 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:27.410000 audit[4805]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc97a9af20 a2=0 a3=7ffc97a9af0c items=0 ppid=4429 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:27.410000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:27.413000 audit[4801]: NETFILTER_CFG table=nat:134 family=2 entries=15 op=nft_register_chain pid=4801 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:27.413000 audit[4801]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffea247f2f0 a2=0 a3=7ffea247f2dc items=0 ppid=4429 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:27.413000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:27.440000 audit[4799]: NETFILTER_CFG table=raw:135 family=2 entries=21 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:27.440000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffba453670 a2=0 a3=7fffba45365c items=0 ppid=4429 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:57:27.440000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:27.535786 kubelet[2883]: E0124 00:57:27.523263 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:27.452000 audit[4802]: NETFILTER_CFG table=filter:136 family=2 entries=192 op=nft_register_chain pid=4802 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:27.452000 audit[4802]: SYSCALL arch=c000003e syscall=46 success=yes exit=111724 a0=3 a1=7ffe77cb8b40 a2=0 a3=7ffe77cb8b2c items=0 ppid=4429 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:27.452000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:28.967944 kubelet[2883]: E0124 00:57:28.966662 2883 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:30.983073 kubelet[2883]: E0124 00:57:30.982393 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:31.973913 containerd[1609]: time="2026-01-24T00:57:31.973025364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:31.986390 containerd[1609]: time="2026-01-24T00:57:31.986343799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:32.934170 systemd-networkd[1507]: cali6f224585e4e: Link UP Jan 24 00:57:32.934594 systemd-networkd[1507]: cali6f224585e4e: Gained carrier Jan 24 00:57:32.967954 kubelet[2883]: E0124 00:57:32.965323 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:32.967954 kubelet[2883]: E0124 00:57:32.966145 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:32.978886 containerd[1609]: time="2026-01-24T00:57:32.977245225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:32.982534 containerd[1609]: time="2026-01-24T00:57:32.982446036Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.286 [INFO][4813] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0 calico-apiserver-5985c58466- calico-apiserver 5a9025b1-4c1d-4d71-8add-e1566c4e04cc 972 0 2026-01-24 00:56:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5985c58466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5985c58466-tkqwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f224585e4e [] [] }} ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.287 [INFO][4813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.562 [INFO][4842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" HandleID="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Workload="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.564 [INFO][4842] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" HandleID="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Workload="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405bb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5985c58466-tkqwx", "timestamp":"2026-01-24 00:57:32.562264655 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.565 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.565 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.565 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.621 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.705 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.761 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.787 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.807 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.807 [INFO][4842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.813 [INFO][4842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.858 [INFO][4842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.893 [INFO][4842] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.893 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" host="localhost" Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.893 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:33.046686 containerd[1609]: 2026-01-24 00:57:32.893 [INFO][4842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" HandleID="k8s-pod-network.2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Workload="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:32.912 [INFO][4813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0", GenerateName:"calico-apiserver-5985c58466-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9025b1-4c1d-4d71-8add-e1566c4e04cc", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5985c58466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5985c58466-tkqwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f224585e4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:32.916 [INFO][4813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:32.916 [INFO][4813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f224585e4e ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:32.948 [INFO][4813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:32.949 [INFO][4813] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0", 
GenerateName:"calico-apiserver-5985c58466-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9025b1-4c1d-4d71-8add-e1566c4e04cc", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5985c58466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e", Pod:"calico-apiserver-5985c58466-tkqwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f224585e4e", MAC:"8a:06:04:9e:5c:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.053972 containerd[1609]: 2026-01-24 00:57:33.008 [INFO][4813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-tkqwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--tkqwx-eth0" Jan 24 00:57:33.199132 systemd-networkd[1507]: calif960a3a4b2a: Link UP Jan 24 00:57:33.202175 systemd-networkd[1507]: calif960a3a4b2a: Gained carrier Jan 24 00:57:33.213000 audit[4903]: NETFILTER_CFG table=filter:137 family=2 entries=62 op=nft_register_chain 
pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:33.219855 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 24 00:57:33.219977 kernel: audit: type=1325 audit(1769216253.213:706): table=filter:137 family=2 entries=62 op=nft_register_chain pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:33.266182 kernel: audit: type=1300 audit(1769216253.213:706): arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffd0e8a7ca0 a2=0 a3=7ffd0e8a7c8c items=0 ppid=4429 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.213000 audit[4903]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffd0e8a7ca0 a2=0 a3=7ffd0e8a7c8c items=0 ppid=4429 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.213000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.373 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5985c58466--q852p-eth0 calico-apiserver-5985c58466- calico-apiserver 6f07ac71-f9bf-4f16-8022-eeee9f625fbd 974 0 2026-01-24 00:56:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5985c58466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5985c58466-q852p eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif960a3a4b2a [] [] }} ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.373 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.780 [INFO][4850] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" HandleID="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Workload="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.781 [INFO][4850] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" HandleID="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Workload="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5985c58466-q852p", "timestamp":"2026-01-24 00:57:32.780814855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.781 [INFO][4850] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.897 [INFO][4850] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.897 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:32.956 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.024 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.080 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.098 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.110 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.110 [INFO][4850] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.117 [INFO][4850] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71 Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.151 [INFO][4850] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 
00:57:33.170 [INFO][4850] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.170 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" host="localhost" Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.170 [INFO][4850] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.283454 containerd[1609]: 2026-01-24 00:57:33.170 [INFO][4850] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" HandleID="k8s-pod-network.7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Workload="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.284502 kernel: audit: type=1327 audit(1769216253.213:706): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.186 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5985c58466--q852p-eth0", GenerateName:"calico-apiserver-5985c58466-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f07ac71-f9bf-4f16-8022-eeee9f625fbd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 24, 0, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5985c58466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5985c58466-q852p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif960a3a4b2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.187 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.187 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif960a3a4b2a ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.197 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.197 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5985c58466--q852p-eth0", GenerateName:"calico-apiserver-5985c58466-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f07ac71-f9bf-4f16-8022-eeee9f625fbd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5985c58466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71", Pod:"calico-apiserver-5985c58466-q852p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif960a3a4b2a", MAC:"92:49:92:af:cf:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.284562 containerd[1609]: 2026-01-24 00:57:33.237 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" Namespace="calico-apiserver" Pod="calico-apiserver-5985c58466-q852p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5985c58466--q852p-eth0" Jan 24 00:57:33.360043 containerd[1609]: time="2026-01-24T00:57:33.356575670Z" level=info msg="connecting to shim 2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e" address="unix:///run/containerd/s/a5686746ea62d5a3e4a9cbe4f5aa1b053e49bf36ada6e9e8acb4c20beaf5836f" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:33.430000 audit[4936]: NETFILTER_CFG table=filter:138 family=2 entries=53 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:33.447788 kernel: audit: type=1325 audit(1769216253.430:707): table=filter:138 family=2 entries=53 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:33.430000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff10e6af80 a2=0 a3=7fff10e6af6c items=0 ppid=4429 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.430000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:33.504477 kernel: audit: type=1300 audit(1769216253.430:707): arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff10e6af80 a2=0 a3=7fff10e6af6c 
items=0 ppid=4429 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.507892 kernel: audit: type=1327 audit(1769216253.430:707): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:33.544438 containerd[1609]: time="2026-01-24T00:57:33.544346308Z" level=info msg="connecting to shim 7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71" address="unix:///run/containerd/s/0501c063c4fcb946814813448953d2d6c800fb9ee0a2bcb3cef60eb29cd56122" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:33.579253 systemd[1]: Started cri-containerd-2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e.scope - libcontainer container 2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e. Jan 24 00:57:33.837464 systemd[1]: Started cri-containerd-7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71.scope - libcontainer container 7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71. 
Jan 24 00:57:33.860386 kernel: audit: type=1334 audit(1769216253.850:708): prog-id=231 op=LOAD Jan 24 00:57:33.850000 audit: BPF prog-id=231 op=LOAD Jan 24 00:57:33.859000 audit: BPF prog-id=232 op=LOAD Jan 24 00:57:33.859000 audit[4941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.872102 systemd-networkd[1507]: calidd0e4dc3efb: Link UP Jan 24 00:57:33.899158 systemd-networkd[1507]: calidd0e4dc3efb: Gained carrier Jan 24 00:57:33.904197 kernel: audit: type=1334 audit(1769216253.859:709): prog-id=232 op=LOAD Jan 24 00:57:33.906022 kernel: audit: type=1300 audit(1769216253.859:709): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.907288 kernel: audit: type=1327 audit(1769216253.859:709): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.915929 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:33.859000 audit: BPF prog-id=232 op=UNLOAD Jan 24 00:57:33.859000 
audit[4941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.859000 audit: BPF prog-id=233 op=LOAD Jan 24 00:57:33.859000 audit[4941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.859000 audit: BPF prog-id=234 op=LOAD Jan 24 00:57:33.859000 audit[4941]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.859000 audit: BPF 
prog-id=234 op=UNLOAD Jan 24 00:57:33.859000 audit[4941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.859000 audit: BPF prog-id=233 op=UNLOAD Jan 24 00:57:33.859000 audit[4941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 Jan 24 00:57:33.860000 audit: BPF prog-id=235 op=LOAD Jan 24 00:57:33.860000 audit[4941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4923 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:33.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633835396230323633333962613463636535393634613363653533 
Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.317 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0 calico-kube-controllers-5b9dc86db- calico-system 70bde68b-f37d-4bad-bf48-1635753f011a 977 0 2026-01-24 00:56:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b9dc86db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b9dc86db-sl4tg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd0e4dc3efb [] [] }} ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.317 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.557 [INFO][4926] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" HandleID="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Workload="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.558 [INFO][4926] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" 
HandleID="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Workload="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b9dc86db-sl4tg", "timestamp":"2026-01-24 00:57:33.557464015 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.558 [INFO][4926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.558 [INFO][4926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.558 [INFO][4926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.595 [INFO][4926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.629 [INFO][4926] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.702 [INFO][4926] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.737 [INFO][4926] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.750 [INFO][4926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:34.016257 
containerd[1609]: 2026-01-24 00:57:33.751 [INFO][4926] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.781 [INFO][4926] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.810 [INFO][4926] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4926] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" host="localhost" Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:34.016257 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4926] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" HandleID="k8s-pod-network.e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Workload="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.858 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0", GenerateName:"calico-kube-controllers-5b9dc86db-", Namespace:"calico-system", SelfLink:"", UID:"70bde68b-f37d-4bad-bf48-1635753f011a", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9dc86db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b9dc86db-sl4tg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd0e4dc3efb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.858 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.858 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd0e4dc3efb ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.907 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.912 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0", GenerateName:"calico-kube-controllers-5b9dc86db-", Namespace:"calico-system", SelfLink:"", UID:"70bde68b-f37d-4bad-bf48-1635753f011a", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9dc86db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d", Pod:"calico-kube-controllers-5b9dc86db-sl4tg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd0e4dc3efb", MAC:"a2:8d:f7:98:f2:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.019350 containerd[1609]: 2026-01-24 00:57:33.985 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" Namespace="calico-system" Pod="calico-kube-controllers-5b9dc86db-sl4tg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b9dc86db--sl4tg-eth0" Jan 24 00:57:34.095000 audit[5020]: NETFILTER_CFG table=filter:139 family=2 entries=62 op=nft_register_chain pid=5020 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:34.098000 audit: BPF prog-id=236 op=LOAD Jan 24 00:57:34.095000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=28368 a0=3 a1=7ffdf92b6c60 a2=0 a3=7ffdf92b6c4c items=0 ppid=4429 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:34.114000 audit: BPF prog-id=237 op=LOAD Jan 24 00:57:34.114000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.115000 audit: BPF prog-id=237 op=UNLOAD Jan 24 00:57:34.115000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.115000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.126000 audit: BPF prog-id=238 op=LOAD Jan 24 00:57:34.126000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.126000 audit: BPF prog-id=239 op=LOAD Jan 24 00:57:34.126000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.126000 audit: BPF prog-id=239 op=UNLOAD Jan 24 00:57:34.126000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:57:34.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.126000 audit: BPF prog-id=238 op=UNLOAD Jan 24 00:57:34.126000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.152840 systemd-networkd[1507]: cali6f224585e4e: Gained IPv6LL Jan 24 00:57:34.126000 audit: BPF prog-id=240 op=LOAD Jan 24 00:57:34.126000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4962 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765323964346539363037376635363036376533626330353333346363 Jan 24 00:57:34.166852 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:34.354399 systemd-networkd[1507]: calif367218c734: Link UP Jan 24 00:57:34.389294 
systemd-networkd[1507]: calif367218c734: Gained carrier Jan 24 00:57:34.488905 containerd[1609]: time="2026-01-24T00:57:34.487298331Z" level=info msg="connecting to shim e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d" address="unix:///run/containerd/s/47cec827e794fd55509536d0c4e8ba31834faa32deae1afa91b165a7edb8a9b3" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:34.514536 containerd[1609]: time="2026-01-24T00:57:34.514491417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-tkqwx,Uid:5a9025b1-4c1d-4d71-8add-e1566c4e04cc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ec859b026339ba4cce5964a3ce533906310fb356d029f922b8d13b151857a6e\"" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.638 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0 coredns-674b8bbfcf- kube-system 2bf7f94a-619e-473a-b5dc-1f8159f51234 978 0 2026-01-24 00:56:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9qkfx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif367218c734 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.639 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.552887 containerd[1609]: 
2026-01-24 00:57:33.813 [INFO][4970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" HandleID="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Workload="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.825 [INFO][4970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" HandleID="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Workload="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c61a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9qkfx", "timestamp":"2026-01-24 00:57:33.813386144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.826 [INFO][4970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.853 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.887 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.910 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.966 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:33.978 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.008 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.009 [INFO][4970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.019 [INFO][4970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.055 [INFO][4970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.095 [INFO][4970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.095 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" host="localhost" Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.095 [INFO][4970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:34.552887 containerd[1609]: 2026-01-24 00:57:34.095 [INFO][4970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" HandleID="k8s-pod-network.9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Workload="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.112 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2bf7f94a-619e-473a-b5dc-1f8159f51234", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9qkfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif367218c734", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.112 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.112 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif367218c734 ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.387 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.418 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2bf7f94a-619e-473a-b5dc-1f8159f51234", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a", Pod:"coredns-674b8bbfcf-9qkfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif367218c734", MAC:"ba:da:79:fb:00:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.554192 containerd[1609]: 2026-01-24 00:57:34.510 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-9qkfx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9qkfx-eth0" Jan 24 00:57:34.578822 containerd[1609]: time="2026-01-24T00:57:34.565985085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:34.725327 containerd[1609]: time="2026-01-24T00:57:34.678161442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5985c58466-q852p,Uid:6f07ac71-f9bf-4f16-8022-eeee9f625fbd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e29d4e96077f56067e3bc05334cc7ae4c322e60eb0796514a1c5892d848ea71\"" Jan 24 00:57:34.781985 systemd[1]: Started cri-containerd-e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d.scope - libcontainer container e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d. 
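Aside: the audit PROCTITLE records interleaved above carry the process command line as a hex-encoded, NUL-separated argv vector. A minimal sketch (plain Python, not part of the log) of decoding one of the `iptables-nft-restore` proctitle payloads that appears repeatedly in these records:

```python
def decode_proctitle(hex_payload: str) -> list[str]:
    # audit PROCTITLE payloads are hex-encoded argv vectors,
    # with a NUL byte ("00") separating the arguments
    return bytes.fromhex(hex_payload).decode().split("\x00")

# proctitle value copied verbatim from an iptables-nft-restore
# audit record earlier in this log
argv = decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)
# argv == ["iptables-nft-restore", "--noflush", "--verbose",
#          "--wait", "10", "--wait-interval", "50000"]
```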
Jan 24 00:57:34.840980 containerd[1609]: time="2026-01-24T00:57:34.839260952Z" level=info msg="connecting to shim 9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a" address="unix:///run/containerd/s/58ea7d9f14033a45abbf4c58a020d81cb88bedf3c872e4ffcb3648e0921c116d" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:34.849814 containerd[1609]: time="2026-01-24T00:57:34.848182349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:34.837000 audit[5080]: NETFILTER_CFG table=filter:140 family=2 entries=58 op=nft_register_chain pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:34.837000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7ffc990036d0 a2=0 a3=7ffc990036bc items=0 ppid=4429 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.837000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:34.859553 containerd[1609]: time="2026-01-24T00:57:34.859484980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:34.859959 containerd[1609]: time="2026-01-24T00:57:34.859926523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:34.860918 kubelet[2883]: E0124 00:57:34.860809 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:34.861807 kubelet[2883]: E0124 00:57:34.861265 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:34.867255 containerd[1609]: time="2026-01-24T00:57:34.863852051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:34.867353 kubelet[2883]: E0124 00:57:34.867001 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f86n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:34.868986 kubelet[2883]: E0124 00:57:34.868881 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:57:34.988163 systemd-networkd[1507]: calif960a3a4b2a: Gained IPv6LL Jan 24 00:57:34.992000 audit: BPF prog-id=241 op=LOAD Jan 24 00:57:34.999000 audit: BPF prog-id=242 op=LOAD Jan 24 00:57:34.999000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 
ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:34.999000 audit: BPF prog-id=242 op=UNLOAD Jan 24 00:57:34.999000 audit[5058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:34.999000 audit: BPF prog-id=243 op=LOAD Jan 24 00:57:34.999000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:34.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:35.002000 audit: BPF prog-id=244 op=LOAD Jan 24 00:57:35.002000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:35.002000 audit: BPF prog-id=244 op=UNLOAD Jan 24 00:57:35.002000 audit[5058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:35.002000 audit: BPF prog-id=243 op=UNLOAD Jan 24 00:57:35.002000 audit[5058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:35.002000 audit: BPF prog-id=245 op=LOAD Jan 24 00:57:35.002000 audit[5058]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5038 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532366133323334613937613636373738626538353062653861313039 Jan 24 00:57:35.014010 containerd[1609]: time="2026-01-24T00:57:34.993578509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:35.015831 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:35.028194 containerd[1609]: time="2026-01-24T00:57:35.028142920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:35.029448 containerd[1609]: time="2026-01-24T00:57:35.029396414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:35.050217 kubelet[2883]: E0124 00:57:35.047548 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:35.052662 kubelet[2883]: E0124 00:57:35.051094 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:35.052830 kubelet[2883]: E0124 00:57:35.052665 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:35.055195 kubelet[2883]: E0124 00:57:35.055156 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:35.230307 systemd[1]: Started cri-containerd-9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a.scope - libcontainer container 9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a. 
Jan 24 00:57:35.318000 audit: BPF prog-id=246 op=LOAD Jan 24 00:57:35.319000 audit: BPF prog-id=247 op=LOAD Jan 24 00:57:35.319000 audit[5106]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.319000 audit: BPF prog-id=247 op=UNLOAD Jan 24 00:57:35.319000 audit[5106]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.320000 audit: BPF prog-id=248 op=LOAD Jan 24 00:57:35.320000 audit[5106]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.320000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.320000 audit: BPF prog-id=249 op=LOAD Jan 24 00:57:35.320000 audit[5106]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.320000 audit: BPF prog-id=249 op=UNLOAD Jan 24 00:57:35.320000 audit[5106]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.320000 audit: BPF prog-id=248 op=UNLOAD Jan 24 00:57:35.320000 audit[5106]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:57:35.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.321000 audit: BPF prog-id=250 op=LOAD Jan 24 00:57:35.321000 audit[5106]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5093 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:35.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313461353534373838353463323234383136326630306636626133 Jan 24 00:57:35.325277 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:35.385411 containerd[1609]: time="2026-01-24T00:57:35.385191921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9dc86db-sl4tg,Uid:70bde68b-f37d-4bad-bf48-1635753f011a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e26a3234a97a66778be850be8a109e8d12e1e7d00ba70ac4bf2d35e45128b11d\"" Jan 24 00:57:35.397990 containerd[1609]: time="2026-01-24T00:57:35.397902789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:35.500689 containerd[1609]: time="2026-01-24T00:57:35.492796300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:35.500689 containerd[1609]: time="2026-01-24T00:57:35.499332306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:35.500689 
containerd[1609]: time="2026-01-24T00:57:35.499559710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:35.500946 kubelet[2883]: E0124 00:57:35.500550 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:35.500946 kubelet[2883]: E0124 00:57:35.500663 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:35.503800 kubelet[2883]: E0124 00:57:35.503580 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8624g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:35.508526 kubelet[2883]: E0124 00:57:35.505916 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:35.560184 systemd-networkd[1507]: calif367218c734: Gained IPv6LL Jan 24 00:57:35.652961 containerd[1609]: time="2026-01-24T00:57:35.648313955Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9qkfx,Uid:2bf7f94a-619e-473a-b5dc-1f8159f51234,Namespace:kube-system,Attempt:0,} returns sandbox id \"9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a\"" Jan 24 00:57:35.656594 kubelet[2883]: E0124 00:57:35.656505 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:35.709818 containerd[1609]: time="2026-01-24T00:57:35.704220979Z" level=info msg="CreateContainer within sandbox \"9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:35.746039 kubelet[2883]: E0124 00:57:35.745984 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:35.763425 kubelet[2883]: E0124 00:57:35.762548 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:57:35.775489 kubelet[2883]: E0124 00:57:35.775393 2883 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:35.807013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907741560.mount: Deactivated successfully. Jan 24 00:57:35.821077 containerd[1609]: time="2026-01-24T00:57:35.821001851Z" level=info msg="Container f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:57:35.851164 containerd[1609]: time="2026-01-24T00:57:35.850296715Z" level=info msg="CreateContainer within sandbox \"9614a55478854c2248162f00f6ba3eb326680678bf384058767ed8fa0c655b8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14\"" Jan 24 00:57:35.853817 containerd[1609]: time="2026-01-24T00:57:35.852667711Z" level=info msg="StartContainer for \"f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14\"" Jan 24 00:57:35.861666 containerd[1609]: time="2026-01-24T00:57:35.861495698Z" level=info msg="connecting to shim f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14" address="unix:///run/containerd/s/58ea7d9f14033a45abbf4c58a020d81cb88bedf3c872e4ffcb3648e0921c116d" protocol=ttrpc version=3 Jan 24 00:57:35.874016 systemd-networkd[1507]: calidd0e4dc3efb: Gained IPv6LL Jan 24 00:57:36.118227 containerd[1609]: time="2026-01-24T00:57:36.117983657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:36.147530 systemd[1]: Started 
cri-containerd-f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14.scope - libcontainer container f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14. Jan 24 00:57:36.193000 audit[5152]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:36.193000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe930fb930 a2=0 a3=7ffe930fb91c items=0 ppid=3048 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:36.220000 audit: BPF prog-id=251 op=LOAD Jan 24 00:57:36.222000 audit: BPF prog-id=252 op=LOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=252 op=UNLOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=253 op=LOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=254 op=LOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=254 op=UNLOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=253 op=UNLOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.222000 audit: BPF prog-id=255 op=LOAD Jan 24 00:57:36.222000 audit[5140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5093 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376466613866373263616365336666623138623964623061343864 Jan 24 00:57:36.310932 containerd[1609]: time="2026-01-24T00:57:36.307370537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:36.325377 containerd[1609]: time="2026-01-24T00:57:36.323249142Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:36.325377 containerd[1609]: time="2026-01-24T00:57:36.324260496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:36.373901 kubelet[2883]: E0124 00:57:36.324976 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:36.373901 kubelet[2883]: E0124 00:57:36.325227 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:36.373901 kubelet[2883]: E0124 00:57:36.354936 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxh5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:36.373901 kubelet[2883]: E0124 00:57:36.359434 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:36.376000 audit[5152]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:36.376000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe930fb930 a2=0 a3=7ffe930fb91c items=0 ppid=3048 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:36.481000 audit[5168]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:36.481000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc910e3b10 a2=0 a3=7ffc910e3afc items=0 ppid=3048 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:36.494000 audit[5168]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:36.494000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc910e3b10 a2=0 a3=7ffc910e3afc items=0 ppid=3048 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:36.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:36.500899 containerd[1609]: time="2026-01-24T00:57:36.500826007Z" level=info msg="StartContainer for \"f47dfa8f72cace3ffb18b9db0a48d2ed53a29668fbbfd839b2461d16262c9b14\" returns successfully" Jan 24 00:57:36.787250 kubelet[2883]: E0124 00:57:36.785221 2883 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:36.790303 kubelet[2883]: E0124 00:57:36.790145 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:36.795672 kubelet[2883]: E0124 00:57:36.795546 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:36.914583 kubelet[2883]: I0124 00:57:36.914412 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9qkfx" podStartSLOduration=90.914389864 podStartE2EDuration="1m30.914389864s" podCreationTimestamp="2026-01-24 00:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:36.902504137 +0000 UTC m=+93.651205992" watchObservedRunningTime="2026-01-24 00:57:36.914389864 +0000 UTC m=+93.663091720" Jan 24 00:57:36.965670 kubelet[2883]: E0124 00:57:36.965293 2883 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:37.545000 audit[5184]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:37.545000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9bdbcac0 a2=0 a3=7fff9bdbcaac items=0 ppid=3048 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:37.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:37.617000 audit[5184]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:37.617000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff9bdbcac0 a2=0 a3=7fff9bdbcaac items=0 ppid=3048 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:37.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:37.789390 kubelet[2883]: E0124 00:57:37.789149 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:38.790776 kubelet[2883]: E0124 00:57:38.789213 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 24 00:57:38.971383 containerd[1609]: time="2026-01-24T00:57:38.971328862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:39.054277 containerd[1609]: time="2026-01-24T00:57:39.052197924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:39.056685 containerd[1609]: time="2026-01-24T00:57:39.055986570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:39.056685 containerd[1609]: time="2026-01-24T00:57:39.056234442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:39.059329 kubelet[2883]: E0124 00:57:39.057949 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:39.059329 kubelet[2883]: E0124 00:57:39.058051 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:39.059329 kubelet[2883]: E0124 00:57:39.058362 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3b1510d29974db1a1191d4e38011034,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:39.061058 containerd[1609]: time="2026-01-24T00:57:39.060870533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:39.139172 containerd[1609]: 
time="2026-01-24T00:57:39.138355151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:39.143893 containerd[1609]: time="2026-01-24T00:57:39.143679061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:39.143893 containerd[1609]: time="2026-01-24T00:57:39.143809397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:39.144292 kubelet[2883]: E0124 00:57:39.144162 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:39.144292 kubelet[2883]: E0124 00:57:39.144252 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:39.144498 kubelet[2883]: E0124 00:57:39.144394 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:39.146640 kubelet[2883]: E0124 00:57:39.146557 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:57:41.977257 containerd[1609]: time="2026-01-24T00:57:41.976884406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:42.126589 containerd[1609]: time="2026-01-24T00:57:42.126318177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:42.144791 containerd[1609]: time="2026-01-24T00:57:42.142039426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:42.144791 containerd[1609]: time="2026-01-24T00:57:42.142165131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:42.144972 kubelet[2883]: E0124 00:57:42.142387 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:42.144972 kubelet[2883]: E0124 00:57:42.142668 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:42.144972 kubelet[2883]: E0124 00:57:42.143022 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.155789 containerd[1609]: time="2026-01-24T00:57:42.153910920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:42.248206 containerd[1609]: time="2026-01-24T00:57:42.244961967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:42.252959 containerd[1609]: time="2026-01-24T00:57:42.252827046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:42.252959 containerd[1609]: time="2026-01-24T00:57:42.252949836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:42.254110 kubelet[2883]: E0124 00:57:42.253500 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:42.254110 
kubelet[2883]: E0124 00:57:42.253557 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:42.254110 kubelet[2883]: E0124 00:57:42.253874 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.256463 kubelet[2883]: E0124 00:57:42.256307 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:57:47.984654 containerd[1609]: time="2026-01-24T00:57:47.982202656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:48.119547 containerd[1609]: time="2026-01-24T00:57:48.117968812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:48.129131 containerd[1609]: time="2026-01-24T00:57:48.128437645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:48.129131 containerd[1609]: time="2026-01-24T00:57:48.128855592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:48.131331 kubelet[2883]: E0124 00:57:48.130900 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.131331 kubelet[2883]: E0124 00:57:48.131005 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.131331 kubelet[2883]: E0124 00:57:48.131282 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:48.133245 kubelet[2883]: E0124 00:57:48.132976 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:57:49.975858 kubelet[2883]: E0124 00:57:49.975069 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:57:49.983932 containerd[1609]: time="2026-01-24T00:57:49.983856630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:50.070822 containerd[1609]: time="2026-01-24T00:57:50.069168094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:50.074423 containerd[1609]: time="2026-01-24T00:57:50.074278198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:50.074423 containerd[1609]: time="2026-01-24T00:57:50.074392522Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:50.076809 kubelet[2883]: E0124 00:57:50.075503 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:50.076809 kubelet[2883]: E0124 00:57:50.075557 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:50.078258 kubelet[2883]: E0124 00:57:50.078123 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f86n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:50.081688 kubelet[2883]: E0124 00:57:50.081471 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:57:50.979506 kubelet[2883]: E0124 00:57:50.977478 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:57:52.000273 containerd[1609]: time="2026-01-24T00:57:51.997299910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:52.108166 containerd[1609]: time="2026-01-24T00:57:52.104863869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:52.112243 containerd[1609]: 
time="2026-01-24T00:57:52.112074139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:52.112243 containerd[1609]: time="2026-01-24T00:57:52.112208913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:52.112444 kubelet[2883]: E0124 00:57:52.112339 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:52.112444 kubelet[2883]: E0124 00:57:52.112385 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:52.115092 kubelet[2883]: E0124 00:57:52.112524 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8624g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:52.118258 kubelet[2883]: E0124 00:57:52.118144 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:57:53.897191 kubelet[2883]: E0124 00:57:53.896385 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:54.985291 kubelet[2883]: E0124 00:57:54.985115 2883 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:01.973126 kubelet[2883]: E0124 00:58:01.973029 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:58:02.969657 kubelet[2883]: E0124 00:58:02.969520 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:58:02.982134 containerd[1609]: time="2026-01-24T00:58:02.977659457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:03.076071 containerd[1609]: time="2026-01-24T00:58:03.075981693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:03.081300 containerd[1609]: time="2026-01-24T00:58:03.080432172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:03.081300 containerd[1609]: time="2026-01-24T00:58:03.081029024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:03.081599 kubelet[2883]: E0124 00:58:03.080823 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:03.081599 kubelet[2883]: E0124 00:58:03.080886 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:03.081599 kubelet[2883]: E0124 00:58:03.081065 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxh5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:03.084594 kubelet[2883]: E0124 00:58:03.083056 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:58:05.979682 containerd[1609]: time="2026-01-24T00:58:05.978936591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:06.071452 containerd[1609]: time="2026-01-24T00:58:06.071356939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:06.080514 containerd[1609]: 
time="2026-01-24T00:58:06.080337310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:06.080514 containerd[1609]: time="2026-01-24T00:58:06.080479917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:06.081585 kubelet[2883]: E0124 00:58:06.081526 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:06.082257 kubelet[2883]: E0124 00:58:06.081590 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:06.082257 kubelet[2883]: E0124 00:58:06.082016 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3b1510d29974db1a1191d4e38011034,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:06.088537 containerd[1609]: time="2026-01-24T00:58:06.087923581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:06.184410 containerd[1609]: 
time="2026-01-24T00:58:06.184065313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:06.189115 containerd[1609]: time="2026-01-24T00:58:06.188967461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:06.189226 containerd[1609]: time="2026-01-24T00:58:06.189113270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:06.189598 kubelet[2883]: E0124 00:58:06.189444 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:06.190367 kubelet[2883]: E0124 00:58:06.189597 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:06.192322 kubelet[2883]: E0124 00:58:06.192206 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:06.195236 kubelet[2883]: E0124 00:58:06.193800 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:58:06.982786 kubelet[2883]: E0124 00:58:06.978901 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:58:07.967351 kubelet[2883]: E0124 00:58:07.967125 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:09.983803 containerd[1609]: time="2026-01-24T00:58:09.983676440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:10.081195 containerd[1609]: time="2026-01-24T00:58:10.081144983Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:10.091400 containerd[1609]: time="2026-01-24T00:58:10.090932302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:10.091400 containerd[1609]: time="2026-01-24T00:58:10.091050804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:10.092049 kubelet[2883]: E0124 00:58:10.091932 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:10.092049 kubelet[2883]: E0124 00:58:10.092030 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:10.092697 kubelet[2883]: E0124 00:58:10.092273 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:58:10.098349 containerd[1609]: time="2026-01-24T00:58:10.097491040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:10.198300 containerd[1609]: time="2026-01-24T00:58:10.198207131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:10.201906 containerd[1609]: time="2026-01-24T00:58:10.201475235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:10.201906 containerd[1609]: time="2026-01-24T00:58:10.201618914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:10.202079 kubelet[2883]: E0124 00:58:10.201951 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:10.202079 kubelet[2883]: E0124 00:58:10.202010 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:10.202198 kubelet[2883]: E0124 00:58:10.202148 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:10.204569 kubelet[2883]: E0124 00:58:10.204503 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:14.991145 containerd[1609]: time="2026-01-24T00:58:14.989093306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:14.995381 kubelet[2883]: E0124 00:58:14.990136 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:58:15.109349 containerd[1609]: time="2026-01-24T00:58:15.109260243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:15.112075 containerd[1609]: time="2026-01-24T00:58:15.111901440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:15.112218 containerd[1609]: time="2026-01-24T00:58:15.112078822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:15.113029 kubelet[2883]: E0124 00:58:15.112320 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.113029 kubelet[2883]: E0124 00:58:15.112391 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.113029 kubelet[2883]: E0124 00:58:15.112549 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f86n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:15.115093 kubelet[2883]: E0124 00:58:15.114594 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:58:17.989073 containerd[1609]: time="2026-01-24T00:58:17.985615915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:18.085591 containerd[1609]: time="2026-01-24T00:58:18.085458487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:18.097166 containerd[1609]: time="2026-01-24T00:58:18.095149449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:18.097166 containerd[1609]: time="2026-01-24T00:58:18.095319307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:18.099311 kubelet[2883]: E0124 00:58:18.095538 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:18.099311 kubelet[2883]: E0124 00:58:18.095599 2883 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:18.099311 kubelet[2883]: E0124 00:58:18.095871 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:18.099311 kubelet[2883]: E0124 00:58:18.097687 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:58:18.974617 containerd[1609]: time="2026-01-24T00:58:18.973892495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:19.050644 containerd[1609]: time="2026-01-24T00:58:19.050288984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 
00:58:19.058434 containerd[1609]: time="2026-01-24T00:58:19.057874344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:19.058434 containerd[1609]: time="2026-01-24T00:58:19.058099178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:19.059055 kubelet[2883]: E0124 00:58:19.058937 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:19.060611 kubelet[2883]: E0124 00:58:19.060126 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:19.060611 kubelet[2883]: E0124 00:58:19.060505 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8624g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:19.062241 kubelet[2883]: E0124 00:58:19.062155 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:58:20.983256 kubelet[2883]: E0124 00:58:20.982956 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:58:21.987473 kubelet[2883]: E0124 00:58:21.986639 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:25.981818 kubelet[2883]: E0124 00:58:25.981293 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:58:28.972273 kubelet[2883]: E0124 00:58:28.968896 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:58:28.972273 kubelet[2883]: E0124 00:58:28.970544 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:58:29.979264 kubelet[2883]: E0124 00:58:29.979211 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:58:31.992317 kubelet[2883]: E0124 00:58:31.991422 
2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:31.995831 kubelet[2883]: E0124 00:58:31.995225 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:32.001966 kubelet[2883]: E0124 00:58:32.001919 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:58:33.993401 kubelet[2883]: E0124 00:58:33.992946 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:34.967812 kubelet[2883]: E0124 00:58:34.966572 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:36.970412 kubelet[2883]: E0124 00:58:36.968871 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:58:41.986612 kubelet[2883]: E0124 00:58:41.986525 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:58:42.972462 kubelet[2883]: E0124 00:58:42.971922 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:58:42.973407 kubelet[2883]: E0124 00:58:42.973146 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:58:43.973797 kubelet[2883]: E0124 00:58:43.973262 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:58:44.969592 kubelet[2883]: E0124 00:58:44.969138 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:46.973025 kubelet[2883]: E0124 00:58:46.972961 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:48.978353 kubelet[2883]: E0124 00:58:48.978274 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:58:49.223801 update_engine[1588]: I20260124 00:58:49.221983 1588 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:58:49.223801 update_engine[1588]: I20260124 00:58:49.223357 1588 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 
00:58:49.226634 update_engine[1588]: I20260124 00:58:49.225275 1588 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.227540 1588 omaha_request_params.cc:62] Current group set to alpha Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228253 1588 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228278 1588 update_attempter.cc:643] Scheduling an action processor start. Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228425 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228513 1588 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228600 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228615 1588 omaha_request_action.cc:272] Request: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: Jan 24 00:58:49.230320 update_engine[1588]: I20260124 00:58:49.228626 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:58:49.274968 update_engine[1588]: I20260124 00:58:49.273912 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:58:49.274968 update_engine[1588]: I20260124 00:58:49.274868 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:58:49.298549 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:58:49.301683 update_engine[1588]: E20260124 00:58:49.300863 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:58:49.301683 update_engine[1588]: I20260124 00:58:49.301014 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:58:54.967191 kubelet[2883]: E0124 00:58:54.966976 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:54.969071 kubelet[2883]: E0124 00:58:54.968847 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:58:55.975580 containerd[1609]: time="2026-01-24T00:58:55.974956578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:56.071369 containerd[1609]: time="2026-01-24T00:58:56.071048054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:56.095426 containerd[1609]: time="2026-01-24T00:58:56.087565127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:56.095426 containerd[1609]: 
time="2026-01-24T00:58:56.087622405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:56.095426 containerd[1609]: time="2026-01-24T00:58:56.095203535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:56.095941 kubelet[2883]: E0124 00:58:56.090187 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:56.095941 kubelet[2883]: E0124 00:58:56.090241 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:56.095941 kubelet[2883]: E0124 00:58:56.090428 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3b1510d29974db1a1191d4e38011034,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.177176 containerd[1609]: time="2026-01-24T00:58:56.177113893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:56.182968 containerd[1609]: 
time="2026-01-24T00:58:56.182884255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:56.183963 containerd[1609]: time="2026-01-24T00:58:56.183203653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:56.185990 kubelet[2883]: E0124 00:58:56.184602 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:56.186607 kubelet[2883]: E0124 00:58:56.186345 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:56.189604 kubelet[2883]: E0124 00:58:56.188086 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.190382 kubelet[2883]: E0124 00:58:56.190170 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:58:56.967923 kubelet[2883]: E0124 00:58:56.967597 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:58:57.973950 containerd[1609]: time="2026-01-24T00:58:57.973364283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:58.079457 kernel: kauditd_printk_skb: 130 callbacks suppressed Jan 24 00:58:58.079605 kernel: audit: type=1130 audit(1769216338.059:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:60572 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 24 00:58:58.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:60572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:58.060199 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). Jan 24 00:58:58.080455 containerd[1609]: time="2026-01-24T00:58:58.065352338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:58.080455 containerd[1609]: time="2026-01-24T00:58:58.076983497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:58.080455 containerd[1609]: time="2026-01-24T00:58:58.077110203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:58.080606 kubelet[2883]: E0124 00:58:58.079431 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:58.080606 kubelet[2883]: E0124 00:58:58.079493 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:58.080606 kubelet[2883]: E0124 00:58:58.079837 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:58.083546 containerd[1609]: time="2026-01-24T00:58:58.083517650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:58.175447 containerd[1609]: time="2026-01-24T00:58:58.172557544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:58.176222 containerd[1609]: time="2026-01-24T00:58:58.176139408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:58.176384 containerd[1609]: time="2026-01-24T00:58:58.176241219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:58.177080 kubelet[2883]: E0124 00:58:58.176968 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:58.177080 kubelet[2883]: E0124 00:58:58.177073 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:58.202417 kubelet[2883]: E0124 00:58:58.201621 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxh5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:58.211457 kubelet[2883]: E0124 00:58:58.210199 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:58:58.211624 containerd[1609]: time="2026-01-24T00:58:58.210656914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:58.323570 containerd[1609]: time="2026-01-24T00:58:58.322841083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:58.329247 containerd[1609]: 
time="2026-01-24T00:58:58.328657029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:58.329565 containerd[1609]: time="2026-01-24T00:58:58.328681666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:58.337652 kubelet[2883]: E0124 00:58:58.331224 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:58.338045 kubelet[2883]: E0124 00:58:58.338003 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:58.341510 kubelet[2883]: E0124 00:58:58.339019 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:58.343591 kubelet[2883]: E0124 00:58:58.343492 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:58:58.586000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.592442 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:58.596891 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:58:58.612800 kernel: audit: type=1101 audit(1769216338.586:757): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.590000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.622404 systemd-logind[1585]: New session 9 of user core. Jan 24 00:58:58.639617 kernel: audit: type=1103 audit(1769216338.590:758): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.639883 kernel: audit: type=1006 audit(1769216338.590:759): pid=5315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 24 00:58:58.590000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78e1e120 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:58.680837 kernel: audit: type=1300 audit(1769216338.590:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78e1e120 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:58.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:58.692138 kernel: audit: type=1327 audit(1769216338.590:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:58.693675 systemd[1]: Started session-9.scope - Session 9 of User core. 
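The recurring pattern in the entries above is `ErrImagePull` with `code = NotFound` for several `ghcr.io/flatcar/calico/*:v3.30.4` references. When triaging a journal like this, a first step is to extract the distinct failing image references from the `failed to pull and unpack image` messages. A minimal sketch (the regex and the embedded sample line are illustrative, matching the escaped-quote form these journal entries use):

```python
import re

# Each failed pull carries the offending reference in a
# "failed to pull and unpack image \"<ref>\"" message; collecting those
# gives a de-duplicated list of images to verify against the registry.
PULL_ERR = re.compile(r'failed to pull and unpack image \\?"([^"\\]+)\\?"')

def failing_images(journal_text: str) -> list[str]:
    seen: list[str] = []
    for ref in PULL_ERR.findall(journal_text):
        if ref not in seen:
            seen.append(ref)
    return seen

# A journal line in the same shape as the entries above (quotes escaped
# as \" inside the logged error string).
sample = (
    'level=error msg="PullImage \\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\" failed" '
    'error="rpc error: code = NotFound desc = failed to pull and unpack image '
    '\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\": failed to resolve image: '
    'ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"'
)
print(failing_images(sample))  # → ['ghcr.io/flatcar/calico/goldmane:v3.30.4']
```

Each extracted reference can then be checked manually against the registry (a 404 from ghcr.io, as logged here, means the tag does not exist under that repository path).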
Jan 24 00:58:58.707000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.758361 kernel: audit: type=1105 audit(1769216338.707:760): pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.758514 kernel: audit: type=1103 audit(1769216338.716:761): pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.716000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:59.336080 sshd[5319]: Connection closed by 10.0.0.1 port 60572 Jan 24 00:58:59.337820 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:59.343000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:59.343000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:59.384562 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:60572.service: Deactivated successfully. Jan 24 00:58:59.396878 kernel: audit: type=1106 audit(1769216339.343:762): pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:59.396970 kernel: audit: type=1104 audit(1769216339.343:763): pid=5315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:59.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:60572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:59.399407 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:58:59.404462 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:58:59.407150 systemd-logind[1585]: Removed session 9. Jan 24 00:58:59.947960 update_engine[1588]: I20260124 00:58:59.947852 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:58:59.947960 update_engine[1588]: I20260124 00:58:59.947959 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:58:59.952778 update_engine[1588]: I20260124 00:58:59.952600 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:58:59.982523 update_engine[1588]: E20260124 00:58:59.980908 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:58:59.983596 update_engine[1588]: I20260124 00:58:59.983440 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:59:00.990568 containerd[1609]: time="2026-01-24T00:59:00.990021049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:01.092889 containerd[1609]: time="2026-01-24T00:59:01.090249497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:01.110740 containerd[1609]: time="2026-01-24T00:59:01.110418737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:01.111238 containerd[1609]: time="2026-01-24T00:59:01.111110139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:01.115203 kubelet[2883]: E0124 00:59:01.112381 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:01.115203 kubelet[2883]: E0124 00:59:01.112448 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:01.115203 kubelet[2883]: E0124 00:59:01.112809 2883 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f86n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:01.119012 kubelet[2883]: E0124 00:59:01.117609 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:59:01.977220 kubelet[2883]: E0124 00:59:01.975868 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:04.387211 systemd[1]: Started 
sshd@8-10.0.0.104:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450). Jan 24 00:59:04.413992 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:04.417540 kernel: audit: type=1130 audit(1769216344.385:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:04.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:04.714000 audit[5344]: USER_ACCT pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.759449 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:04.771399 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:04.778310 kernel: audit: type=1101 audit(1769216344.714:766): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.821872 kernel: audit: type=1103 audit(1769216344.760:767): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.760000 audit[5344]: CRED_ACQ pid=5344 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.823929 systemd-logind[1585]: New session 10 of user core. Jan 24 00:59:04.875224 kernel: audit: type=1006 audit(1769216344.760:768): pid=5344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 24 00:59:04.875425 kernel: audit: type=1300 audit(1769216344.760:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa2410b0 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:04.760000 audit[5344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa2410b0 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:04.876536 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 00:59:04.760000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:04.944221 kernel: audit: type=1327 audit(1769216344.760:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:04.931000 audit[5344]: USER_START pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.993407 kernel: audit: type=1105 audit(1769216344.931:769): pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.969000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.033166 kernel: audit: type=1103 audit(1769216344.969:770): pid=5354 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.610824 sshd[5354]: Connection closed by 10.0.0.1 port 37450 Jan 24 00:59:05.610287 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:05.612000 audit[5344]: USER_END pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.642070 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:37450.service: Deactivated successfully. Jan 24 00:59:05.650468 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:59:05.613000 audit[5344]: CRED_DISP pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.668073 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:59:05.679315 systemd-logind[1585]: Removed session 10. Jan 24 00:59:05.692912 kernel: audit: type=1106 audit(1769216345.612:771): pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.693058 kernel: audit: type=1104 audit(1769216345.613:772): pid=5344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:05.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:37450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:05.985570 kubelet[2883]: E0124 00:59:05.977836 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:05.986300 containerd[1609]: time="2026-01-24T00:59:05.979840981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:06.091940 containerd[1609]: time="2026-01-24T00:59:06.089197059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:06.095793 containerd[1609]: time="2026-01-24T00:59:06.092210339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:06.095793 containerd[1609]: time="2026-01-24T00:59:06.092284828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:06.096948 kubelet[2883]: E0124 00:59:06.093876 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:06.096948 kubelet[2883]: E0124 00:59:06.093931 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:06.096948 kubelet[2883]: E0124 00:59:06.094081 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:06.096948 kubelet[2883]: E0124 00:59:06.095698 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:59:08.970546 kubelet[2883]: E0124 00:59:08.969828 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:09.952254 update_engine[1588]: I20260124 00:59:09.950845 1588 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:59:09.952254 update_engine[1588]: I20260124 00:59:09.951059 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:59:09.952254 update_engine[1588]: I20260124 00:59:09.952142 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:59:09.976018 update_engine[1588]: E20260124 00:59:09.975839 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:59:09.976018 update_engine[1588]: I20260124 00:59:09.975966 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 00:59:10.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:37458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:10.655836 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:37458.service - OpenSSH per-connection server daemon (10.0.0.1:37458). Jan 24 00:59:10.663813 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:10.663928 kernel: audit: type=1130 audit(1769216350.655:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:37458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:10.856000 audit[5373]: USER_ACCT pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.863495 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:10.870951 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 37458 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:10.860000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.901027 kernel: audit: type=1101 audit(1769216350.856:775): pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.901137 kernel: audit: type=1103 audit(1769216350.860:776): pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.910664 kernel: audit: type=1006 audit(1769216350.860:777): pid=5373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 00:59:10.910438 systemd-logind[1585]: New session 11 of user core. 
Jan 24 00:59:10.860000 audit[5373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0ad0bcd0 a2=3 a3=0 items=0 ppid=1 pid=5373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:10.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:10.937575 kernel: audit: type=1300 audit(1769216350.860:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0ad0bcd0 a2=3 a3=0 items=0 ppid=1 pid=5373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:10.937777 kernel: audit: type=1327 audit(1769216350.860:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:10.941864 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 24 00:59:10.954000 audit[5373]: USER_START pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.981820 kernel: audit: type=1105 audit(1769216350.954:778): pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.961000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.000252 kernel: audit: type=1103 audit(1769216350.961:779): pid=5377 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.261921 sshd[5377]: Connection closed by 10.0.0.1 port 37458 Jan 24 00:59:11.263008 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:11.270000 audit[5373]: USER_END pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.283761 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:37458.service: Deactivated successfully. 
Jan 24 00:59:11.292676 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:59:11.303227 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:59:11.307524 systemd-logind[1585]: Removed session 11. Jan 24 00:59:11.270000 audit[5373]: CRED_DISP pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.341512 kernel: audit: type=1106 audit(1769216351.270:780): pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.341663 kernel: audit: type=1104 audit(1769216351.270:781): pid=5373 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:11.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:37458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:11.994052 kubelet[2883]: E0124 00:59:11.990273 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:59:11.996008 containerd[1609]: time="2026-01-24T00:59:11.995566622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:59:11.997998 kubelet[2883]: E0124 00:59:11.997779 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:59:12.096530 containerd[1609]: time="2026-01-24T00:59:12.096167168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:12.103826 containerd[1609]: time="2026-01-24T00:59:12.103641113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:59:12.103968 containerd[1609]: time="2026-01-24T00:59:12.103837729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:12.107229 kubelet[2883]: E0124 00:59:12.104779 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:59:12.107692 kubelet[2883]: E0124 00:59:12.107337 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:59:12.108303 kubelet[2883]: E0124 00:59:12.107945 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8624g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:12.110750 kubelet[2883]: E0124 00:59:12.110461 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:59:12.989875 kubelet[2883]: E0124 00:59:12.986259 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:59:13.977832 kubelet[2883]: E0124 00:59:13.976547 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:59:16.300100 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:44610.service - OpenSSH per-connection server daemon (10.0.0.1:44610). Jan 24 00:59:16.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:44610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:16.316616 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:16.316788 kernel: audit: type=1130 audit(1769216356.299:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:44610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:16.426000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.445044 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 44610 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:16.449602 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:16.458814 systemd-logind[1585]: New session 12 of user core. Jan 24 00:59:16.460057 kernel: audit: type=1101 audit(1769216356.426:784): pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.460091 kernel: audit: type=1103 audit(1769216356.446:785): pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.446000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.486468 kernel: audit: type=1006 audit(1769216356.446:786): pid=5393 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 24 00:59:16.446000 audit[5393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccf4b0530 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:16.488916 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:59:16.520585 kernel: audit: type=1300 audit(1769216356.446:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccf4b0530 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:16.446000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:16.555811 kernel: audit: type=1327 audit(1769216356.446:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:16.555962 kernel: audit: type=1105 audit(1769216356.498:787): pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.498000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.506000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.603975 kernel: audit: type=1103 audit(1769216356.506:788): pid=5397 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.891673 sshd[5397]: Connection closed by 10.0.0.1 port 44610 Jan 24 00:59:16.895038 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:16.902000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.902000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.944384 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:44610.service: Deactivated successfully. Jan 24 00:59:16.948048 kernel: audit: type=1106 audit(1769216356.902:789): pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.948144 kernel: audit: type=1104 audit(1769216356.902:790): pid=5393 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:16.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:44610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:16.952122 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:59:16.960932 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:59:16.967660 systemd-logind[1585]: Removed session 12. Jan 24 00:59:19.952049 update_engine[1588]: I20260124 00:59:19.951689 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:59:19.952049 update_engine[1588]: I20260124 00:59:19.951926 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:59:19.952802 update_engine[1588]: I20260124 00:59:19.952556 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:59:19.978837 update_engine[1588]: E20260124 00:59:19.977360 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977540 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977562 1588 omaha_request_action.cc:617] Omaha request response: Jan 24 00:59:19.978837 update_engine[1588]: E20260124 00:59:19.977895 1588 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977929 1588 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977939 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977950 1588 update_attempter.cc:306] Processing Done. Jan 24 00:59:19.978837 update_engine[1588]: E20260124 00:59:19.977972 1588 update_attempter.cc:619] Update failed. 
Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977981 1588 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.977990 1588 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.978000 1588 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.978083 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.978111 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:59:19.978837 update_engine[1588]: I20260124 00:59:19.978120 1588 omaha_request_action.cc:272] Request: Jan 24 00:59:19.978837 update_engine[1588]: Jan 24 00:59:19.978837 update_engine[1588]: Jan 24 00:59:19.979581 update_engine[1588]: Jan 24 00:59:19.979581 update_engine[1588]: Jan 24 00:59:19.979581 update_engine[1588]: Jan 24 00:59:19.979581 update_engine[1588]: Jan 24 00:59:19.979581 update_engine[1588]: I20260124 00:59:19.978130 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:59:19.979581 update_engine[1588]: I20260124 00:59:19.978165 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:59:19.979581 update_engine[1588]: I20260124 00:59:19.978775 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:59:19.993037 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 00:59:20.007582 update_engine[1588]: E20260124 00:59:20.007510 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:59:20.007963 update_engine[1588]: I20260124 00:59:20.007921 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008067 1588 omaha_request_action.cc:617] Omaha request response: Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008129 1588 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008149 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008160 1588 update_attempter.cc:306] Processing Done. Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008174 1588 update_attempter.cc:310] Error event sent. 
Jan 24 00:59:20.008385 update_engine[1588]: I20260124 00:59:20.008193 1588 update_check_scheduler.cc:74] Next update check in 41m59s Jan 24 00:59:20.009623 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 24 00:59:20.966789 kubelet[2883]: E0124 00:59:20.966535 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:59:21.923690 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:44616.service - OpenSSH per-connection server daemon (10.0.0.1:44616). Jan 24 00:59:21.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:44616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:21.942834 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:21.942915 kernel: audit: type=1130 audit(1769216361.923:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:44616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:22.117000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.119019 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 44616 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:22.122684 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:22.120000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.147382 systemd-logind[1585]: New session 13 of user core. Jan 24 00:59:22.157414 kernel: audit: type=1101 audit(1769216362.117:793): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.157544 kernel: audit: type=1103 audit(1769216362.120:794): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.170785 kernel: audit: type=1006 audit(1769216362.120:795): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 00:59:22.170873 kernel: audit: type=1300 audit(1769216362.120:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe940ec850 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:22.120000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe940ec850 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:22.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:22.197079 kernel: audit: type=1327 audit(1769216362.120:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:22.200265 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:59:22.215000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.244936 kernel: audit: type=1105 audit(1769216362.215:796): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.245077 kernel: audit: type=1103 audit(1769216362.224:797): pid=5416 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.224000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.576202 sshd[5416]: Connection closed by 10.0.0.1 port 44616 Jan 24 00:59:22.574184 sshd-session[5412]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:22.586000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.595304 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:44616.service: Deactivated successfully. Jan 24 00:59:22.624789 kernel: audit: type=1106 audit(1769216362.586:798): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.607262 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:59:22.586000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.628221 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:59:22.654488 systemd-logind[1585]: Removed session 13. Jan 24 00:59:22.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:44616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:22.672610 kernel: audit: type=1104 audit(1769216362.586:799): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:22.974475 kubelet[2883]: E0124 00:59:22.973973 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:59:23.990901 kubelet[2883]: E0124 00:59:23.986423 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:59:24.975602 kubelet[2883]: E0124 00:59:24.974928 2883 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:59:24.976331 kubelet[2883]: E0124 00:59:24.975602 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:59:25.981979 kubelet[2883]: E0124 00:59:25.978060 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:59:27.603307 
systemd[1715]: Created slice background.slice - User Background Tasks Slice. Jan 24 00:59:27.603555 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:39492.service - OpenSSH per-connection server daemon (10.0.0.1:39492). Jan 24 00:59:27.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:39492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:27.607075 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:27.607174 kernel: audit: type=1130 audit(1769216367.601:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:39492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:27.620415 systemd[1715]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 24 00:59:27.726126 systemd[1715]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 24 00:59:28.003000 audit[5455]: USER_ACCT pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.008389 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 39492 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:28.019104 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:28.055769 kernel: audit: type=1101 audit(1769216368.003:802): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.055912 kernel: audit: type=1103 audit(1769216368.003:803): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.003000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.078433 systemd-logind[1585]: New session 14 of user core. 
Jan 24 00:59:28.112962 kernel: audit: type=1006 audit(1769216368.008:804): pid=5455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 00:59:28.119289 kernel: audit: type=1300 audit(1769216368.008:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe68d8c370 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:28.008000 audit[5455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe68d8c370 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:28.127183 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:59:28.008000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:28.164637 kernel: audit: type=1327 audit(1769216368.008:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:28.158000 audit[5455]: USER_START pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.199808 kernel: audit: type=1105 audit(1769216368.158:805): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.179000 
audit[5461]: CRED_ACQ pid=5461 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.233046 kernel: audit: type=1103 audit(1769216368.179:806): pid=5461 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.579826 sshd[5461]: Connection closed by 10.0.0.1 port 39492 Jan 24 00:59:28.580355 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:28.585000 audit[5455]: USER_END pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.597131 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:39492.service: Deactivated successfully. Jan 24 00:59:28.604619 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 24 00:59:28.608879 kernel: audit: type=1106 audit(1769216368.585:807): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.608984 kernel: audit: type=1104 audit(1769216368.586:808): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.586000 audit[5455]: CRED_DISP pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:28.610683 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:59:28.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:39492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:28.627026 systemd-logind[1585]: Removed session 14. Jan 24 00:59:33.629365 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:54942.service - OpenSSH per-connection server daemon (10.0.0.1:54942). Jan 24 00:59:33.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.104:22-10.0.0.1:54942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:33.652564 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:33.652975 kernel: audit: type=1130 audit(1769216373.628:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.104:22-10.0.0.1:54942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:33.887000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:33.889126 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 54942 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:33.896376 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:33.891000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:33.937352 kernel: audit: type=1101 audit(1769216373.887:811): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:33.937680 kernel: audit: type=1103 audit(1769216373.891:812): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:33.968770 kernel: audit: type=1006 audit(1769216373.891:813): pid=5477 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 00:59:33.891000 audit[5477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe89876010 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:33.987556 systemd-logind[1585]: New session 15 of user core. Jan 24 00:59:33.994243 kernel: audit: type=1300 audit(1769216373.891:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe89876010 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:33.994349 kernel: audit: type=1327 audit(1769216373.891:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:33.891000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:33.995430 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:59:34.018000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.050955 kernel: audit: type=1105 audit(1769216374.018:814): pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.051291 kubelet[2883]: E0124 00:59:34.051250 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:59:34.070805 kernel: audit: type=1103 audit(1769216374.039:815): pid=5481 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.039000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.388933 sshd[5481]: Connection closed by 10.0.0.1 port 54942 Jan 24 
00:59:34.387013 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:34.393000 audit[5477]: USER_END pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.457693 kernel: audit: type=1106 audit(1769216374.393:816): pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.457878 kernel: audit: type=1104 audit(1769216374.396:817): pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.396000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:34.428443 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:59:34.435140 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:54942.service: Deactivated successfully. Jan 24 00:59:34.447683 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:59:34.462616 systemd-logind[1585]: Removed session 15. Jan 24 00:59:34.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.104:22-10.0.0.1:54942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 24 00:59:36.973592 kubelet[2883]: E0124 00:59:36.972215 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:59:36.973592 kubelet[2883]: E0124 00:59:36.972425 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:59:37.973431 kubelet[2883]: E0124 00:59:37.973344 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:59:38.969902 kubelet[2883]: E0124 00:59:38.966556 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:59:39.442131 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:54948.service - OpenSSH per-connection server daemon (10.0.0.1:54948). Jan 24 00:59:39.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.104:22-10.0.0.1:54948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:39.455293 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:39.455410 kernel: audit: type=1130 audit(1769216379.442:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.104:22-10.0.0.1:54948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:40.079813 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 54948 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:40.078000 audit[5496]: USER_ACCT pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.102575 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:40.125043 kernel: audit: type=1101 audit(1769216380.078:820): pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.089000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.211589 kernel: audit: type=1103 audit(1769216380.089:821): pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.211791 kernel: audit: type=1006 audit(1769216380.089:822): pid=5496 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 00:59:40.214465 systemd-logind[1585]: New session 16 of user core. 
Jan 24 00:59:40.089000 audit[5496]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf0462f20 a2=3 a3=0 items=0 ppid=1 pid=5496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:40.268379 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:59:40.325586 kernel: audit: type=1300 audit(1769216380.089:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf0462f20 a2=3 a3=0 items=0 ppid=1 pid=5496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:40.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:40.363804 kernel: audit: type=1327 audit(1769216380.089:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:40.306000 audit[5496]: USER_START pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.416188 kernel: audit: type=1105 audit(1769216380.306:823): pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.416304 kernel: audit: type=1103 audit(1769216380.317:824): pid=5502 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.317000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:40.970867 sshd[5502]: Connection closed by 10.0.0.1 port 54948 Jan 24 00:59:40.982069 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:41.011625 kubelet[2883]: E0124 00:59:41.011079 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:59:41.021000 audit[5496]: USER_END pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:41.069369 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:59:41.078031 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:54948.service: Deactivated successfully. 
Jan 24 00:59:41.098475 kernel: audit: type=1106 audit(1769216381.021:825): pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:41.021000 audit[5496]: CRED_DISP pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:41.109923 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:59:41.131346 systemd-logind[1585]: Removed session 16. Jan 24 00:59:41.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.104:22-10.0.0.1:54948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:41.153980 kernel: audit: type=1104 audit(1769216381.021:826): pid=5496 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:44.978170 kubelet[2883]: E0124 00:59:44.978092 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 00:59:46.029793 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:46.029923 kernel: audit: type=1130 audit(1769216386.019:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.104:22-10.0.0.1:44162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:46.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.104:22-10.0.0.1:44162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:46.020322 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:44162.service - OpenSSH per-connection server daemon (10.0.0.1:44162). 
Jan 24 00:59:46.306000 audit[5518]: USER_ACCT pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.313997 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:46.339669 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 44162 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:46.311000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.356696 systemd-logind[1585]: New session 17 of user core. Jan 24 00:59:46.380903 kernel: audit: type=1101 audit(1769216386.306:829): pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.381082 kernel: audit: type=1103 audit(1769216386.311:830): pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.386786 kernel: audit: type=1006 audit(1769216386.311:831): pid=5518 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 24 00:59:46.311000 audit[5518]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1a947450 a2=3 a3=0 items=0 ppid=1 pid=5518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:46.462571 kernel: audit: type=1300 audit(1769216386.311:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1a947450 a2=3 a3=0 items=0 ppid=1 pid=5518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:46.464951 kernel: audit: type=1327 audit(1769216386.311:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:46.311000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:46.486291 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:59:46.508000 audit[5518]: USER_START pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.582110 kernel: audit: type=1105 audit(1769216386.508:832): pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.582366 kernel: audit: type=1103 audit(1769216386.517:833): pid=5522 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.517000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:46.968793 kubelet[2883]: E0124 00:59:46.968154 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:47.182071 sshd[5522]: Connection closed by 10.0.0.1 port 44162 Jan 24 00:59:47.199424 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.201000 audit[5518]: USER_END pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:47.227272 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:59:47.229679 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:44162.service: Deactivated successfully. Jan 24 00:59:47.207000 audit[5518]: CRED_DISP pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:47.240949 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:59:47.247412 systemd-logind[1585]: Removed session 17. 
Jan 24 00:59:47.257110 kernel: audit: type=1106 audit(1769216387.201:834): pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:47.257259 kernel: audit: type=1104 audit(1769216387.207:835): pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:47.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.104:22-10.0.0.1:44162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:48.966826 kubelet[2883]: E0124 00:59:48.966451 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:49.989024 kubelet[2883]: E0124 00:59:49.988536 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 00:59:49.991104 kubelet[2883]: E0124 00:59:49.990287 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 00:59:50.001932 kubelet[2883]: E0124 00:59:50.001868 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 00:59:50.008334 kubelet[2883]: E0124 00:59:50.002484 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 00:59:50.966834 kubelet[2883]: E0124 00:59:50.966052 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:52.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.104:22-10.0.0.1:44168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:52.211181 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:44168.service - OpenSSH per-connection server daemon (10.0.0.1:44168). Jan 24 00:59:52.233913 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:52.234024 kernel: audit: type=1130 audit(1769216392.210:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.104:22-10.0.0.1:44168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:52.431199 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 44168 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:52.429000 audit[5536]: USER_ACCT pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.441256 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:52.433000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.465460 kernel: audit: type=1101 audit(1769216392.429:838): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.466388 kernel: audit: type=1103 audit(1769216392.433:839): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.466459 kernel: audit: type=1006 audit(1769216392.437:840): pid=5536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 24 00:59:52.462802 systemd-logind[1585]: New session 18 of user core. 
Jan 24 00:59:52.475878 kernel: audit: type=1300 audit(1769216392.437:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3927780 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:52.437000 audit[5536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3927780 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:52.504370 kernel: audit: type=1327 audit(1769216392.437:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:52.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:52.515905 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:59:52.523000 audit[5536]: USER_START pid=5536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.534000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.559455 kernel: audit: type=1105 audit(1769216392.523:841): pid=5536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.560910 kernel: audit: type=1103 audit(1769216392.534:842): pid=5540 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.717434 sshd[5540]: Connection closed by 10.0.0.1 port 44168 Jan 24 00:59:52.720000 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:52.721000 audit[5536]: USER_END pid=5536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.739817 kernel: audit: type=1106 audit(1769216392.721:843): pid=5536 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.740042 kernel: audit: type=1104 audit(1769216392.728:844): pid=5536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.728000 audit[5536]: CRED_DISP pid=5536 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.104:22-10.0.0.1:44168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:52.770442 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:44168.service: Deactivated successfully. Jan 24 00:59:52.774305 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:59:52.778078 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:59:52.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.104:22-10.0.0.1:50192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:52.784234 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:50192.service - OpenSSH per-connection server daemon (10.0.0.1:50192). Jan 24 00:59:52.786240 systemd-logind[1585]: Removed session 18. 
Jan 24 00:59:52.895000 audit[5554]: USER_ACCT pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.896995 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 50192 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:52.898000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.898000 audit[5554]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8d64b960 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:52.898000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:52.901459 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:52.919207 systemd-logind[1585]: New session 19 of user core. Jan 24 00:59:52.942222 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 24 00:59:52.950000 audit[5554]: USER_START pid=5554 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.958000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.312165 sshd[5558]: Connection closed by 10.0.0.1 port 50192 Jan 24 00:59:53.313995 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:53.320000 audit[5554]: USER_END pid=5554 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.325000 audit[5554]: CRED_DISP pid=5554 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.345473 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:50192.service: Deactivated successfully. Jan 24 00:59:53.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.104:22-10.0.0.1:50192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:53.367303 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:59:53.377211 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. 
Jan 24 00:59:53.384684 systemd-logind[1585]: Removed session 19. Jan 24 00:59:53.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.104:22-10.0.0.1:50208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:53.391636 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:50208.service - OpenSSH per-connection server daemon (10.0.0.1:50208). Jan 24 00:59:53.555000 audit[5569]: USER_ACCT pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.556757 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 50208 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:53.560000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.560000 audit[5569]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4cd9ae90 a2=3 a3=0 items=0 ppid=1 pid=5569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:53.560000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:53.562843 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:53.580152 systemd-logind[1585]: New session 20 of user core. Jan 24 00:59:53.591199 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 24 00:59:53.606000 audit[5569]: USER_START pid=5569 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.609000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.950152 sshd[5596]: Connection closed by 10.0.0.1 port 50208 Jan 24 00:59:53.952076 sshd-session[5569]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:53.958000 audit[5569]: USER_END pid=5569 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.958000 audit[5569]: CRED_DISP pid=5569 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:53.978420 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:50208.service: Deactivated successfully. Jan 24 00:59:53.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.104:22-10.0.0.1:50208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:53.989366 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:59:53.994841 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. 
Jan 24 00:59:53.998273 systemd-logind[1585]: Removed session 20. Jan 24 00:59:54.972676 kubelet[2883]: E0124 00:59:54.970952 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 00:59:59.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:50218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:59.007788 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218). Jan 24 00:59:59.013556 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:59:59.013794 kernel: audit: type=1130 audit(1769216399.007:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:50218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:59.238859 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 00:59:59.237000 audit[5620]: USER_ACCT pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.245688 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:59.263827 kernel: audit: type=1101 audit(1769216399.237:865): pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.263969 kernel: audit: type=1103 audit(1769216399.242:866): pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.242000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.268946 systemd-logind[1585]: New session 21 of user core. 
Jan 24 00:59:59.284803 kernel: audit: type=1006 audit(1769216399.242:867): pid=5620 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:59:59.303409 kernel: audit: type=1300 audit(1769216399.242:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0a8bc4c0 a2=3 a3=0 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:59.242000 audit[5620]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0a8bc4c0 a2=3 a3=0 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:59.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:59.305159 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 00:59:59.337297 kernel: audit: type=1327 audit(1769216399.242:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:59.339903 kernel: audit: type=1105 audit(1769216399.314:868): pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.314000 audit[5620]: USER_START pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.322000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.360823 kernel: audit: type=1103 audit(1769216399.322:869): pid=5624 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.520117 sshd[5624]: Connection closed by 10.0.0.1 port 50218 Jan 24 00:59:59.523010 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:59.528000 audit[5620]: USER_END pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 24 00:59:59.537521 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:50218.service: Deactivated successfully. Jan 24 00:59:59.544000 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:59:59.547690 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:59:59.550851 systemd-logind[1585]: Removed session 21. Jan 24 00:59:59.561699 kernel: audit: type=1106 audit(1769216399.528:870): pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.561927 kernel: audit: type=1104 audit(1769216399.528:871): pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.528000 audit[5620]: CRED_DISP pid=5620 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:59.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:50218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:59.984979 kubelet[2883]: E0124 00:59:59.984363 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:00:00.969885 kubelet[2883]: E0124 01:00:00.969813 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:00:01.979816 kubelet[2883]: E0124 01:00:01.975478 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:01.989911 kubelet[2883]: E0124 01:00:01.988397 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:00:02.000015 kubelet[2883]: E0124 01:00:01.994963 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:00:04.571577 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:32768.service - OpenSSH per-connection server daemon (10.0.0.1:32768). Jan 24 01:00:04.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:04.582810 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:04.582929 kernel: audit: type=1130 audit(1769216404.571:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:04.814453 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 32768 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:04.813000 audit[5639]: USER_ACCT pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.820684 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:04.859817 kernel: audit: type=1101 audit(1769216404.813:874): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.859911 kernel: audit: type=1103 audit(1769216404.817:875): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.817000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.874838 systemd-logind[1585]: New session 22 of user core. 
Jan 24 01:00:04.915830 kernel: audit: type=1006 audit(1769216404.817:876): pid=5639 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 01:00:04.916216 kernel: audit: type=1300 audit(1769216404.817:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9cafdc10 a2=3 a3=0 items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:04.817000 audit[5639]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9cafdc10 a2=3 a3=0 items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:04.968580 kubelet[2883]: E0124 01:00:04.968536 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:00:04.983386 kernel: audit: type=1327 audit(1769216404.817:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:04.817000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:04.982162 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 01:00:04.998000 audit[5639]: USER_START pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.064776 kernel: audit: type=1105 audit(1769216404.998:877): pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.013000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.114813 kernel: audit: type=1103 audit(1769216405.013:878): pid=5643 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.478355 sshd[5643]: Connection closed by 10.0.0.1 port 32768 Jan 24 01:00:05.478881 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:05.483000 audit[5639]: USER_END pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.490568 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. 
Jan 24 01:00:05.493450 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:32768.service: Deactivated successfully. Jan 24 01:00:05.502970 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 01:00:05.513822 kernel: audit: type=1106 audit(1769216405.483:879): pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.513928 kernel: audit: type=1104 audit(1769216405.483:880): pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.483000 audit[5639]: CRED_DISP pid=5639 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.515079 systemd-logind[1585]: Removed session 22. Jan 24 01:00:05.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:05.974834 kubelet[2883]: E0124 01:00:05.974237 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:05.978308 kubelet[2883]: E0124 01:00:05.977566 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:00:10.581601 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). Jan 24 01:00:10.627084 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:10.627260 kernel: audit: type=1130 audit(1769216410.581:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:10.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:10.964000 audit[5664]: USER_ACCT pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:10.986099 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:11.037587 kernel: audit: type=1101 audit(1769216410.964:883): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.037956 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:11.038304 kubelet[2883]: E0124 01:00:11.026141 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:00:10.973000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.080230 systemd-logind[1585]: New session 23 of user core. 
Jan 24 01:00:11.115433 kernel: audit: type=1103 audit(1769216410.973:884): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.118254 kernel: audit: type=1006 audit(1769216410.973:885): pid=5664 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 01:00:10.973000 audit[5664]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b576950 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:11.176186 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 01:00:11.261926 kernel: audit: type=1300 audit(1769216410.973:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b576950 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:10.973000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:11.292168 kernel: audit: type=1327 audit(1769216410.973:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:11.185000 audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.195000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.508056 kernel: audit: type=1105 audit(1769216411.185:886): pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.508244 kernel: audit: type=1103 audit(1769216411.195:887): pid=5668 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.966216 kubelet[2883]: E0124 01:00:11.966118 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:12.054375 sshd[5668]: Connection closed by 10.0.0.1 port 32774 Jan 24 01:00:12.062428 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:12.099000 audit[5664]: USER_END pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.170039 kernel: audit: type=1106 audit(1769216412.099:888): pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 24 01:00:12.235593 kernel: audit: type=1104 audit(1769216412.099:889): pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.099000 audit[5664]: CRED_DISP pid=5664 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:12.189150 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:32774.service: Deactivated successfully. Jan 24 01:00:12.193168 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. Jan 24 01:00:12.203023 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 01:00:12.216845 systemd-logind[1585]: Removed session 23. 
Jan 24 01:00:12.985064 kubelet[2883]: E0124 01:00:12.982005 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:00:12.996524 kubelet[2883]: E0124 01:00:12.993631 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:00:13.981205 kubelet[2883]: E0124 01:00:13.981145 2883 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:00:15.975567 kubelet[2883]: E0124 01:00:15.975438 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:17.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:58244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:17.105806 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:58244.service - OpenSSH per-connection server daemon (10.0.0.1:58244). Jan 24 01:00:17.116882 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:17.116985 kernel: audit: type=1130 audit(1769216417.105:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:58244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:17.407000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.442054 kernel: audit: type=1101 audit(1769216417.407:892): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.442151 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 58244 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:17.435000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.449018 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:17.478497 kernel: audit: type=1103 audit(1769216417.435:893): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.478598 kernel: audit: type=1006 audit(1769216417.435:894): pid=5685 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 24 01:00:17.478031 systemd-logind[1585]: New session 24 of user core. 
Jan 24 01:00:17.435000 audit[5685]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff96c4a910 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:17.517878 kernel: audit: type=1300 audit(1769216417.435:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff96c4a910 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:17.521888 kernel: audit: type=1327 audit(1769216417.435:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:17.435000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:17.527134 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 24 01:00:17.548000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.588911 kernel: audit: type=1105 audit(1769216417.548:895): pid=5685 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.604000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.642632 kernel: audit: type=1103 audit(1769216417.604:896): pid=5689 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:18.055968 sshd[5689]: Connection closed by 10.0.0.1 port 58244 Jan 24 01:00:18.059620 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:18.064000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:18.116193 kernel: audit: type=1106 audit(1769216418.064:897): pid=5685 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:18.114194 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:58244.service: Deactivated successfully. Jan 24 01:00:18.123530 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 01:00:18.134438 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Jan 24 01:00:18.071000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:18.175041 systemd-logind[1585]: Removed session 24. Jan 24 01:00:18.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:58244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:18.182804 kernel: audit: type=1104 audit(1769216418.071:898): pid=5685 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:18.972610 kubelet[2883]: E0124 01:00:18.972134 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:00:19.992343 containerd[1609]: time="2026-01-24T01:00:19.991966241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 01:00:20.144320 containerd[1609]: time="2026-01-24T01:00:20.144075829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:20.149424 containerd[1609]: time="2026-01-24T01:00:20.149369486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 01:00:20.151174 kubelet[2883]: E0124 01:00:20.150188 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 01:00:20.151174 kubelet[2883]: E0124 01:00:20.150289 2883 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 01:00:20.151174 kubelet[2883]: E0124 01:00:20.150481 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxh5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2256s_calico-system(9fdbb8ee-a6f4-499c-b584-8b75c3240604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:20.152387 containerd[1609]: time="2026-01-24T01:00:20.151131734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:20.152535 kubelet[2883]: E0124 01:00:20.152503 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:00:23.160112 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:23.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.104:22-10.0.0.1:56664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:23.147243 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:56664.service - OpenSSH per-connection server daemon (10.0.0.1:56664). Jan 24 01:00:23.203339 kernel: audit: type=1130 audit(1769216423.146:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.104:22-10.0.0.1:56664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:23.531000 audit[5703]: USER_ACCT pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.538972 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 56664 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:23.579985 kernel: audit: type=1101 audit(1769216423.531:901): pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.580137 kernel: audit: type=1103 audit(1769216423.544:902): pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.544000 audit[5703]: 
CRED_ACQ pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.552355 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:23.608881 kernel: audit: type=1006 audit(1769216423.544:903): pid=5703 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 24 01:00:23.544000 audit[5703]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe650f95c0 a2=3 a3=0 items=0 ppid=1 pid=5703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:23.631085 systemd-logind[1585]: New session 25 of user core. Jan 24 01:00:23.669839 kernel: audit: type=1300 audit(1769216423.544:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe650f95c0 a2=3 a3=0 items=0 ppid=1 pid=5703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:23.544000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:23.679904 kernel: audit: type=1327 audit(1769216423.544:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:23.698632 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 24 01:00:23.717000 audit[5703]: USER_START pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.781979 kernel: audit: type=1105 audit(1769216423.717:904): pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.782085 kernel: audit: type=1103 audit(1769216423.727:905): pid=5724 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.727000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.015019 containerd[1609]: time="2026-01-24T01:00:24.013919013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 01:00:24.274475 containerd[1609]: time="2026-01-24T01:00:24.274018036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:24.294200 containerd[1609]: time="2026-01-24T01:00:24.294063772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 01:00:24.295063 
containerd[1609]: time="2026-01-24T01:00:24.294164515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:24.295628 kubelet[2883]: E0124 01:00:24.295275 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 01:00:24.301650 kubelet[2883]: E0124 01:00:24.295668 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 01:00:24.301650 kubelet[2883]: E0124 01:00:24.295944 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 01:00:24.305458 containerd[1609]: time="2026-01-24T01:00:24.304508681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 01:00:24.345414 sshd[5724]: Connection closed by 10.0.0.1 port 56664 Jan 24 01:00:24.346398 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:24.358000 audit[5703]: USER_END pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.375379 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:56664.service: Deactivated successfully. Jan 24 01:00:24.399805 containerd[1609]: time="2026-01-24T01:00:24.399560737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:24.409467 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 24 01:00:24.417443 containerd[1609]: time="2026-01-24T01:00:24.417215028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 01:00:24.417443 containerd[1609]: time="2026-01-24T01:00:24.417346093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:24.420937 kubelet[2883]: E0124 01:00:24.420357 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 01:00:24.420937 kubelet[2883]: E0124 01:00:24.420465 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 01:00:24.422440 kubelet[2883]: E0124 01:00:24.420655 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rkd9m_calico-system(e6e0379d-4209-43c1-9c94-53533c368367): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:24.422841 kernel: audit: type=1106 audit(1769216424.358:906): pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.424548 kubelet[2883]: E0124 01:00:24.423368 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:00:24.358000 audit[5703]: CRED_DISP pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.447632 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit. Jan 24 01:00:24.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.104:22-10.0.0.1:56664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:24.456590 kernel: audit: type=1104 audit(1769216424.358:907): pid=5703 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.468179 systemd-logind[1585]: Removed session 25. Jan 24 01:00:25.970562 kubelet[2883]: E0124 01:00:25.970445 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:00:25.980060 kubelet[2883]: E0124 01:00:25.979497 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:00:27.172278 containerd[1609]: time="2026-01-24T01:00:27.161859524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 01:00:27.586038 containerd[1609]: time="2026-01-24T01:00:27.585321715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:27.600253 containerd[1609]: time="2026-01-24T01:00:27.599902628Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 01:00:27.600253 containerd[1609]: time="2026-01-24T01:00:27.600155550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:27.602665 kubelet[2883]: E0124 01:00:27.600479 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 01:00:27.602665 kubelet[2883]: E0124 01:00:27.600589 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 01:00:27.602665 kubelet[2883]: E0124 01:00:27.600962 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3b1510d29974db1a1191d4e38011034,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:27.605040 containerd[1609]: time="2026-01-24T01:00:27.605012305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 01:00:27.729020 containerd[1609]: 
time="2026-01-24T01:00:27.728222486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:27.774102 containerd[1609]: time="2026-01-24T01:00:27.768996914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 01:00:27.774102 containerd[1609]: time="2026-01-24T01:00:27.769155510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:27.774345 kubelet[2883]: E0124 01:00:27.769314 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 01:00:27.774345 kubelet[2883]: E0124 01:00:27.769374 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 01:00:27.774345 kubelet[2883]: E0124 01:00:27.769508 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv92s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58d88bd994-v27xr_calico-system(ae809202-0be0-4f65-b3c1-0018455a5691): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:27.776899 kubelet[2883]: E0124 01:00:27.775034 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:00:29.407263 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672). Jan 24 01:00:29.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:56672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:29.436924 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:29.437072 kernel: audit: type=1130 audit(1769216429.407:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:56672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:30.341000 audit[5750]: USER_ACCT pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.390403 kernel: audit: type=1101 audit(1769216430.341:910): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.390489 kernel: audit: type=1103 audit(1769216430.385:911): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.385000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.390612 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:30.406582 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:30.385000 audit[5750]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccda8e3d0 a2=3 a3=0 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:30.475935 systemd-logind[1585]: New session 26 of user core. 
Jan 24 01:00:30.525579 kernel: audit: type=1006 audit(1769216430.385:912): pid=5750 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 24 01:00:30.592053 kernel: audit: type=1300 audit(1769216430.385:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccda8e3d0 a2=3 a3=0 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:30.599146 kernel: audit: type=1327 audit(1769216430.385:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:30.385000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:30.587239 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 01:00:30.630000 audit[5750]: USER_START pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.705012 kernel: audit: type=1105 audit(1769216430.630:913): pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.665000 audit[5754]: CRED_ACQ pid=5754 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.790624 kernel: audit: type=1103 audit(1769216430.665:914): pid=5754 
uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.987426 containerd[1609]: time="2026-01-24T01:00:30.986521943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 01:00:31.010422 kubelet[2883]: E0124 01:00:31.010370 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:00:31.487124 containerd[1609]: time="2026-01-24T01:00:31.487067248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:31.502081 containerd[1609]: time="2026-01-24T01:00:31.502012292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 01:00:31.507558 containerd[1609]: time="2026-01-24T01:00:31.502163697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:31.508429 kubelet[2883]: E0124 01:00:31.502566 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 
01:00:31.508429 kubelet[2883]: E0124 01:00:31.502627 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 01:00:31.508429 kubelet[2883]: E0124 01:00:31.503128 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f86n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-tkqwx_calico-apiserver(5a9025b1-4c1d-4d71-8add-e1566c4e04cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:31.508429 kubelet[2883]: E0124 01:00:31.505571 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:00:31.614866 sshd[5754]: Connection closed by 10.0.0.1 port 56672 Jan 24 01:00:31.616384 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:31.623000 audit[5750]: USER_END pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.669831 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:56672.service: Deactivated successfully. Jan 24 01:00:31.677515 kernel: audit: type=1106 audit(1769216431.623:915): pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.678426 kernel: audit: type=1104 audit(1769216431.644:916): pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.644000 audit[5750]: CRED_DISP pid=5750 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.679690 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 01:00:31.700645 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit. Jan 24 01:00:31.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:56672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:31.711269 systemd-logind[1585]: Removed session 26. 
Jan 24 01:00:34.051595 kubelet[2883]: E0124 01:00:34.033900 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:36.762863 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:36.763007 kernel: audit: type=1130 audit(1769216436.725:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:36.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:36.726120 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:55978.service - OpenSSH per-connection server daemon (10.0.0.1:55978). 
Jan 24 01:00:37.007017 containerd[1609]: time="2026-01-24T01:00:37.006930051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 01:00:37.324305 containerd[1609]: time="2026-01-24T01:00:37.321562733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:37.357571 sshd[5781]: Accepted publickey for core from 10.0.0.1 port 55978 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:37.356000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.371437 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:37.376623 containerd[1609]: time="2026-01-24T01:00:37.376557861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 01:00:37.377504 containerd[1609]: time="2026-01-24T01:00:37.376994105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:37.378049 kubelet[2883]: E0124 01:00:37.378001 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 01:00:37.380941 kubelet[2883]: E0124 01:00:37.380903 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 01:00:37.383502 kubelet[2883]: E0124 01:00:37.383432 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8624g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b9dc86db-sl4tg_calico-system(70bde68b-f37d-4bad-bf48-1635753f011a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:37.389419 kubelet[2883]: E0124 01:00:37.387036 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:00:37.463681 kernel: audit: type=1101 audit(1769216437.356:919): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 24 01:00:37.463926 kernel: audit: type=1103 audit(1769216437.365:920): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.365000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.483265 systemd-logind[1585]: New session 27 of user core. Jan 24 01:00:37.600038 kernel: audit: type=1006 audit(1769216437.365:921): pid=5781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 24 01:00:37.628142 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 01:00:37.365000 audit[5781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff942d9920 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:37.730878 kernel: audit: type=1300 audit(1769216437.365:921): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff942d9920 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:37.731023 kernel: audit: type=1327 audit(1769216437.365:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:37.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:37.701000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=27 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.733000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.822394 kernel: audit: type=1105 audit(1769216437.701:922): pid=5781 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.822938 kernel: audit: type=1103 audit(1769216437.733:923): pid=5785 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.991495 containerd[1609]: time="2026-01-24T01:00:37.988013411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 01:00:38.039274 kubelet[2883]: E0124 01:00:38.039126 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:00:38.117985 containerd[1609]: time="2026-01-24T01:00:38.115486435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:38.123865 containerd[1609]: time="2026-01-24T01:00:38.122963722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 01:00:38.124135 containerd[1609]: time="2026-01-24T01:00:38.124102357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:38.124878 kubelet[2883]: E0124 01:00:38.124676 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 01:00:38.124878 kubelet[2883]: E0124 01:00:38.124857 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 01:00:38.125338 kubelet[2883]: E0124 01:00:38.125196 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5985c58466-q852p_calico-apiserver(6f07ac71-f9bf-4f16-8022-eeee9f625fbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:38.126860 kubelet[2883]: E0124 01:00:38.126476 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:00:38.255690 sshd[5785]: Connection closed by 10.0.0.1 port 55978 Jan 24 01:00:38.258067 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:38.269000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:38.302667 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:55978.service: Deactivated successfully. Jan 24 01:00:38.315130 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 01:00:38.329898 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit. Jan 24 01:00:38.342516 kernel: audit: type=1106 audit(1769216438.269:924): pid=5781 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:38.342582 kernel: audit: type=1104 audit(1769216438.274:925): pid=5781 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:38.274000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:38.345556 systemd-logind[1585]: Removed session 27. Jan 24 01:00:38.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:40.828205 kubelet[2883]: E0124 01:00:40.788871 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:00:41.998192 kubelet[2883]: E0124 01:00:41.998132 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:00:42.018338 kubelet[2883]: E0124 01:00:42.018288 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:00:43.381413 systemd[1]: Started sshd@26-10.0.0.104:22-10.0.0.1:44652.service - OpenSSH per-connection server daemon (10.0.0.1:44652). Jan 24 01:00:43.460818 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:43.460939 kernel: audit: type=1130 audit(1769216443.380:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:43.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:43.848595 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 44652 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:43.846000 audit[5807]: USER_ACCT pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.884082 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:43.898025 kernel: audit: type=1101 audit(1769216443.846:928): pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.899014 kernel: audit: type=1103 audit(1769216443.858:929): pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.858000 audit[5807]: CRED_ACQ pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.907991 systemd-logind[1585]: New session 28 of user core. Jan 24 01:00:43.942085 kernel: audit: type=1006 audit(1769216443.858:930): pid=5807 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 24 01:00:43.969285 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 01:00:43.858000 audit[5807]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9fb07960 a2=3 a3=0 items=0 ppid=1 pid=5807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:44.107469 kernel: audit: type=1300 audit(1769216443.858:930): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9fb07960 a2=3 a3=0 items=0 ppid=1 pid=5807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:44.117849 kernel: audit: type=1327 audit(1769216443.858:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:43.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:44.009000 audit[5807]: USER_START pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 24 01:00:44.181932 kernel: audit: type=1105 audit(1769216444.009:931): pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.182090 kernel: audit: type=1103 audit(1769216444.025:932): pid=5811 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.025000 audit[5811]: CRED_ACQ pid=5811 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.926900 sshd[5811]: Connection closed by 10.0.0.1 port 44652 Jan 24 01:00:44.929880 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:44.936000 audit[5807]: USER_END pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.958023 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit. 
Jan 24 01:00:44.974133 kernel: audit: type=1106 audit(1769216444.936:933): pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.973388 systemd[1]: sshd@26-10.0.0.104:22-10.0.0.1:44652.service: Deactivated successfully. Jan 24 01:00:45.043083 kernel: audit: type=1104 audit(1769216444.936:934): pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.936000 audit[5807]: CRED_DISP pid=5807 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.992297 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 01:00:45.031592 systemd-logind[1585]: Removed session 28. Jan 24 01:00:44.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:45.124910 containerd[1609]: time="2026-01-24T01:00:45.058890046Z" level=info msg="container event discarded" container=44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf type=CONTAINER_CREATED_EVENT Jan 24 01:00:45.128118 containerd[1609]: time="2026-01-24T01:00:45.127924446Z" level=info msg="container event discarded" container=44e62118ae4aecd55a1b82640a42ce54dbd33ea9093d70e9f7bcacb9ea79c9cf type=CONTAINER_STARTED_EVENT Jan 24 01:00:45.560950 containerd[1609]: time="2026-01-24T01:00:45.559950398Z" level=info msg="container event discarded" container=32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3 type=CONTAINER_CREATED_EVENT Jan 24 01:00:45.560950 containerd[1609]: time="2026-01-24T01:00:45.560004539Z" level=info msg="container event discarded" container=026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa type=CONTAINER_CREATED_EVENT Jan 24 01:00:45.560950 containerd[1609]: time="2026-01-24T01:00:45.560016141Z" level=info msg="container event discarded" container=026a541896235271be740f0b04dfe9c3c16f4a76b530e6a9b7b3c0b7d270bdaa type=CONTAINER_STARTED_EVENT Jan 24 01:00:45.619329 containerd[1609]: time="2026-01-24T01:00:45.614439236Z" level=info msg="container event discarded" container=5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b type=CONTAINER_CREATED_EVENT Jan 24 01:00:45.619329 containerd[1609]: time="2026-01-24T01:00:45.614914874Z" level=info msg="container event discarded" container=5160a59a44040cfedb516304a34b98c6afb3428cfc9d56a2d187adad72e9952b type=CONTAINER_STARTED_EVENT Jan 24 01:00:45.674421 containerd[1609]: time="2026-01-24T01:00:45.673904915Z" level=info msg="container event discarded" container=31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6 type=CONTAINER_CREATED_EVENT Jan 24 01:00:45.729535 containerd[1609]: time="2026-01-24T01:00:45.728897610Z" level=info msg="container event discarded" 
container=6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1 type=CONTAINER_CREATED_EVENT Jan 24 01:00:46.478654 containerd[1609]: time="2026-01-24T01:00:46.478537346Z" level=info msg="container event discarded" container=32d5a12104927d5cf025be99b7bd01c6573af724050f080b103b33ab4bbe21d3 type=CONTAINER_STARTED_EVENT Jan 24 01:00:46.605215 containerd[1609]: time="2026-01-24T01:00:46.605102434Z" level=info msg="container event discarded" container=6c8d0809bc53d51e6bca0444859e5fe08308b31485470288d257cec26f40e1c1 type=CONTAINER_STARTED_EVENT Jan 24 01:00:46.698559 containerd[1609]: time="2026-01-24T01:00:46.698499483Z" level=info msg="container event discarded" container=31d82bb0fabfefcb8fb396dd27f350cf8d5552ede80309c9b956f7566a1820b6 type=CONTAINER_STARTED_EVENT Jan 24 01:00:48.661018 kubelet[2883]: E0124 01:00:48.659584 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:00:49.023055 kubelet[2883]: E0124 01:00:49.008554 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:00:49.026645 kubelet[2883]: E0124 01:00:49.026341 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:00:50.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:44662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:50.362446 systemd[1]: Started sshd@27-10.0.0.104:22-10.0.0.1:44662.service - OpenSSH per-connection server daemon (10.0.0.1:44662). Jan 24 01:00:50.384974 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:50.385074 kernel: audit: type=1130 audit(1769216450.361:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:44662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:51.162000 audit[5825]: USER_ACCT pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.214991 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:51.266156 kernel: audit: type=1101 audit(1769216451.162:937): pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.266212 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 44662 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:51.198000 audit[5825]: CRED_ACQ pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.292969 kernel: audit: type=1103 audit(1769216451.198:938): pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.335011 kernel: audit: type=1006 audit(1769216451.200:939): pid=5825 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 24 01:00:51.335200 kernel: audit: type=1300 audit(1769216451.200:939): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1d1ce790 a2=3 a3=0 items=0 ppid=1 pid=5825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:51.200000 audit[5825]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1d1ce790 a2=3 a3=0 items=0 ppid=1 pid=5825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:51.354381 kernel: audit: type=1327 audit(1769216451.200:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:51.200000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:51.357674 systemd-logind[1585]: New session 29 of user core. Jan 24 01:00:51.411860 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 24 01:00:51.465000 audit[5825]: USER_START pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.539336 kernel: audit: type=1105 audit(1769216451.465:940): pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.480000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.605942 kernel: audit: type=1103 audit(1769216451.480:941): pid=5829 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.236515 sshd[5829]: Connection closed by 10.0.0.1 port 44662 Jan 24 01:00:52.237470 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:52.241000 audit[5825]: USER_END pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.252641 systemd-logind[1585]: Session 29 logged out. Waiting for processes to exit. Jan 24 01:00:52.278289 kernel: audit: type=1106 audit(1769216452.241:942): pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.260684 systemd[1]: sshd@27-10.0.0.104:22-10.0.0.1:44662.service: Deactivated successfully. Jan 24 01:00:52.274269 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 01:00:52.241000 audit[5825]: CRED_DISP pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.289386 systemd-logind[1585]: Removed session 29. 
Jan 24 01:00:52.315388 kernel: audit: type=1104 audit(1769216452.241:943): pid=5825 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:44662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:52.975921 kubelet[2883]: E0124 01:00:52.974991 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:00:54.147677 kubelet[2883]: E0124 01:00:54.109834 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:00:55.571112 kubelet[2883]: E0124 01:00:55.568860 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:55.590988 kubelet[2883]: E0124 01:00:55.572574 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:55.621089 kubelet[2883]: E0124 01:00:55.620287 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:00:57.486979 systemd[1]: Started sshd@28-10.0.0.104:22-10.0.0.1:40234.service - OpenSSH per-connection server daemon (10.0.0.1:40234). Jan 24 01:00:57.525865 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:57.526070 kernel: audit: type=1130 audit(1769216457.494:945): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.104:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:57.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.104:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:57.758000 audit[5870]: USER_ACCT pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.768214 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 40234 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:00:57.804953 kernel: audit: type=1101 audit(1769216457.758:946): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.805283 kernel: audit: type=1103 audit(1769216457.772:947): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.772000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.795297 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:57.817192 systemd-logind[1585]: New session 30 of user core. 
Jan 24 01:00:58.095208 kernel: audit: type=1006 audit(1769216457.772:948): pid=5870 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 24 01:00:58.099526 kernel: audit: type=1300 audit(1769216457.772:948): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce12af8a0 a2=3 a3=0 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:58.116639 kernel: audit: type=1327 audit(1769216457.772:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:57.772000 audit[5870]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce12af8a0 a2=3 a3=0 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:57.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:58.054238 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 24 01:00:58.331275 kernel: audit: type=1105 audit(1769216458.280:949): pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.280000 audit[5870]: USER_START pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.330000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.372381 kernel: audit: type=1103 audit(1769216458.330:950): pid=5874 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:59.026963 sshd[5874]: Connection closed by 10.0.0.1 port 40234 Jan 24 01:00:59.023538 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:59.035000 audit[5870]: USER_END pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:59.053964 systemd-logind[1585]: Session 30 logged out. Waiting for processes to exit. 
Jan 24 01:00:59.061666 systemd[1]: sshd@28-10.0.0.104:22-10.0.0.1:40234.service: Deactivated successfully. Jan 24 01:00:59.093385 kernel: audit: type=1106 audit(1769216459.035:951): pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:59.035000 audit[5870]: CRED_DISP pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:59.123381 systemd[1]: session-30.scope: Deactivated successfully. Jan 24 01:00:59.150446 kernel: audit: type=1104 audit(1769216459.035:952): pid=5870 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:59.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.104:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:59.227609 systemd-logind[1585]: Removed session 30. 
Jan 24 01:01:01.816132 kubelet[2883]: E0124 01:01:01.813268 2883 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.763s" Jan 24 01:01:01.829597 kubelet[2883]: E0124 01:01:01.829107 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:01:01.892008 kubelet[2883]: E0124 01:01:01.891187 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:01:01.982593 kubelet[2883]: E0124 01:01:01.982252 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:01:04.113574 systemd[1]: Started sshd@29-10.0.0.104:22-10.0.0.1:49096.service - OpenSSH per-connection server daemon (10.0.0.1:49096). Jan 24 01:01:04.136046 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:04.136176 kernel: audit: type=1130 audit(1769216464.121:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:49096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:04.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:49096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:04.454000 audit[5891]: USER_ACCT pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.457667 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 49096 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:04.466692 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:04.496946 kernel: audit: type=1101 audit(1769216464.454:955): pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.497052 kernel: audit: type=1103 audit(1769216464.462:956): pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.462000 audit[5891]: CRED_ACQ pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.508694 kernel: audit: type=1006 audit(1769216464.462:957): pid=5891 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 24 01:01:04.504518 systemd-logind[1585]: New session 31 of user core. 
Jan 24 01:01:04.462000 audit[5891]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6662ab90 a2=3 a3=0 items=0 ppid=1 pid=5891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:04.462000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:04.533480 kernel: audit: type=1300 audit(1769216464.462:957): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6662ab90 a2=3 a3=0 items=0 ppid=1 pid=5891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:04.533619 kernel: audit: type=1327 audit(1769216464.462:957): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:04.535524 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 24 01:01:04.547000 audit[5891]: USER_START pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.560000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.615958 kernel: audit: type=1105 audit(1769216464.547:958): pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.616111 kernel: audit: type=1103 audit(1769216464.560:959): pid=5895 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.955806 sshd[5895]: Connection closed by 10.0.0.1 port 49096 Jan 24 01:01:04.959581 sshd-session[5891]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:04.960000 audit[5891]: USER_END pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.976630 systemd[1]: sshd@29-10.0.0.104:22-10.0.0.1:49096.service: Deactivated successfully. 
Jan 24 01:01:04.988516 systemd[1]: session-31.scope: Deactivated successfully. Jan 24 01:01:05.012132 kernel: audit: type=1106 audit(1769216464.960:960): pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:05.012270 kernel: audit: type=1104 audit(1769216464.963:961): pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.963000 audit[5891]: CRED_DISP pid=5891 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.999684 systemd-logind[1585]: Session 31 logged out. Waiting for processes to exit. Jan 24 01:01:05.005486 systemd-logind[1585]: Removed session 31. Jan 24 01:01:04.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:49096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:06.969399 kubelet[2883]: E0124 01:01:06.969213 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:01:06.975603 kubelet[2883]: E0124 01:01:06.972081 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:01:07.625953 containerd[1609]: time="2026-01-24T01:01:07.625791807Z" level=info msg="container event discarded" container=95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915 type=CONTAINER_CREATED_EVENT Jan 24 01:01:07.625953 containerd[1609]: time="2026-01-24T01:01:07.625910409Z" level=info msg="container event discarded" container=95ef124363e83c47865bddb37e8bd620a0b7e60bfa2ad28361ce4b9657cb5915 type=CONTAINER_STARTED_EVENT Jan 24 
01:01:07.853131 containerd[1609]: time="2026-01-24T01:01:07.852767427Z" level=info msg="container event discarded" container=2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50 type=CONTAINER_CREATED_EVENT Jan 24 01:01:07.927640 containerd[1609]: time="2026-01-24T01:01:07.927325464Z" level=info msg="container event discarded" container=86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb type=CONTAINER_CREATED_EVENT Jan 24 01:01:07.927640 containerd[1609]: time="2026-01-24T01:01:07.927406515Z" level=info msg="container event discarded" container=86dca9fcf32656a9c1c5c0fd0f5bceb5acf3f6d610a4137b16a100c5036aa9fb type=CONTAINER_STARTED_EVENT Jan 24 01:01:07.972187 kubelet[2883]: E0124 01:01:07.969803 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:01:08.390223 containerd[1609]: time="2026-01-24T01:01:08.390068693Z" level=info msg="container event discarded" container=2715afafae0f39302d2a9d2fe25c0f000f9f2a5bfb51e3b5070d30ac8911bd50 type=CONTAINER_STARTED_EVENT Jan 24 01:01:10.000590 systemd[1]: Started sshd@30-10.0.0.104:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108). Jan 24 01:01:10.008886 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:10.008979 kernel: audit: type=1130 audit(1769216469.998:963): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:49108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:09.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:49108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:10.111000 audit[5915]: USER_ACCT pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.124649 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:10.125199 kernel: audit: type=1101 audit(1769216470.111:964): pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.131372 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:10.125000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.145783 kernel: audit: type=1103 audit(1769216470.125:965): pid=5915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.149157 systemd-logind[1585]: New session 32 of user core. 
Jan 24 01:01:10.125000 audit[5915]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbe46f630 a2=3 a3=0 items=0 ppid=1 pid=5915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:10.172077 kernel: audit: type=1006 audit(1769216470.125:966): pid=5915 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 24 01:01:10.172322 kernel: audit: type=1300 audit(1769216470.125:966): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbe46f630 a2=3 a3=0 items=0 ppid=1 pid=5915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:10.172413 kernel: audit: type=1327 audit(1769216470.125:966): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:10.125000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:10.179243 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 24 01:01:10.184000 audit[5915]: USER_START pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.219874 kernel: audit: type=1105 audit(1769216470.184:967): pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.220013 kernel: audit: type=1103 audit(1769216470.188:968): pid=5919 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.188000 audit[5919]: CRED_ACQ pid=5919 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.371691 sshd[5919]: Connection closed by 10.0.0.1 port 49108 Jan 24 01:01:10.372570 sshd-session[5915]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:10.374000 audit[5915]: USER_END pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.385445 systemd[1]: sshd@30-10.0.0.104:22-10.0.0.1:49108.service: Deactivated successfully. 
Jan 24 01:01:10.388428 systemd[1]: session-32.scope: Deactivated successfully. Jan 24 01:01:10.391134 systemd-logind[1585]: Session 32 logged out. Waiting for processes to exit. Jan 24 01:01:10.374000 audit[5915]: CRED_DISP pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.394785 kernel: audit: type=1106 audit(1769216470.374:969): pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.394896 kernel: audit: type=1104 audit(1769216470.374:970): pid=5915 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.395045 systemd-logind[1585]: Removed session 32. Jan 24 01:01:10.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:49108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:12.969257 kubelet[2883]: E0124 01:01:12.969166 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:01:13.969520 kubelet[2883]: E0124 01:01:13.969321 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:13.971588 kubelet[2883]: E0124 01:01:13.971377 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:01:13.972439 kubelet[2883]: E0124 01:01:13.971972 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:14.978809 kubelet[2883]: E0124 01:01:14.966425 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:01:15.421119 systemd[1]: Started sshd@31-10.0.0.104:22-10.0.0.1:40106.service - OpenSSH per-connection server daemon (10.0.0.1:40106). Jan 24 01:01:15.428407 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:15.428547 kernel: audit: type=1130 audit(1769216475.420:972): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.104:22-10.0.0.1:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:15.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.104:22-10.0.0.1:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:15.704139 sshd[5933]: Accepted publickey for core from 10.0.0.1 port 40106 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:15.702000 audit[5933]: USER_ACCT pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.711427 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:15.759946 kernel: audit: type=1101 audit(1769216475.702:973): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.760086 kernel: audit: type=1103 audit(1769216475.707:974): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.707000 audit[5933]: CRED_ACQ pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.755432 systemd-logind[1585]: New session 33 of user core. 
Jan 24 01:01:15.769787 kernel: audit: type=1006 audit(1769216475.707:975): pid=5933 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 24 01:01:15.707000 audit[5933]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd78b640 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:15.805808 kernel: audit: type=1300 audit(1769216475.707:975): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd78b640 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:15.814487 kernel: audit: type=1327 audit(1769216475.707:975): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:15.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:15.836330 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 24 01:01:15.873000 audit[5933]: USER_START pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.942650 kernel: audit: type=1105 audit(1769216475.873:976): pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.914000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.001992 kernel: audit: type=1103 audit(1769216475.914:977): pid=5937 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.533603 sshd[5937]: Connection closed by 10.0.0.1 port 40106 Jan 24 01:01:16.535523 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:16.540000 audit[5933]: USER_END pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.549061 systemd[1]: sshd@31-10.0.0.104:22-10.0.0.1:40106.service: Deactivated successfully. 
Jan 24 01:01:16.550458 systemd-logind[1585]: Session 33 logged out. Waiting for processes to exit. Jan 24 01:01:16.553093 systemd[1]: session-33.scope: Deactivated successfully. Jan 24 01:01:16.560003 kernel: audit: type=1106 audit(1769216476.540:978): pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.560115 kernel: audit: type=1104 audit(1769216476.540:979): pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.540000 audit[5933]: CRED_DISP pid=5933 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:16.556356 systemd-logind[1585]: Removed session 33. Jan 24 01:01:16.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.104:22-10.0.0.1:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:17.810374 containerd[1609]: time="2026-01-24T01:01:17.789175789Z" level=info msg="container event discarded" container=b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6 type=CONTAINER_CREATED_EVENT Jan 24 01:01:19.067141 containerd[1609]: time="2026-01-24T01:01:18.491889538Z" level=info msg="container event discarded" container=b44d29a801cc5eb6c3dfdca21fea9d3ab892f6f8c06ed3bd78237d68ed63d3c6 type=CONTAINER_STARTED_EVENT Jan 24 01:01:21.367569 kubelet[2883]: E0124 01:01:21.355085 2883 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.977s" Jan 24 01:01:21.492997 kubelet[2883]: E0124 01:01:21.489166 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:01:21.494814 kubelet[2883]: E0124 01:01:21.494693 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:01:21.606693 systemd[1]: Started sshd@32-10.0.0.104:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). Jan 24 01:01:21.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:40120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:21.627570 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:21.627801 kernel: audit: type=1130 audit(1769216481.608:981): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:40120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:21.978520 kubelet[2883]: E0124 01:01:21.976698 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:21.990933 kubelet[2883]: E0124 01:01:21.984093 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:22.042000 audit[5950]: USER_ACCT pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.054782 sshd[5950]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:22.057794 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 
24 01:01:22.094157 kernel: audit: type=1101 audit(1769216482.042:982): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.094285 kernel: audit: type=1103 audit(1769216482.053:983): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.053000 audit[5950]: CRED_ACQ pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.096642 systemd-logind[1585]: New session 34 of user core. Jan 24 01:01:22.130012 kernel: audit: type=1006 audit(1769216482.053:984): pid=5950 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 24 01:01:22.130173 kernel: audit: type=1300 audit(1769216482.053:984): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe641831c0 a2=3 a3=0 items=0 ppid=1 pid=5950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:22.053000 audit[5950]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe641831c0 a2=3 a3=0 items=0 ppid=1 pid=5950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:22.174446 kernel: audit: type=1327 audit(1769216482.053:984): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:22.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:22.184906 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 24 01:01:22.205000 audit[5950]: USER_START pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.209000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.276821 kernel: audit: type=1105 audit(1769216482.205:985): pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.277024 kernel: audit: type=1103 audit(1769216482.209:986): pid=5954 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.639083 sshd[5954]: Connection closed by 10.0.0.1 port 40120 Jan 24 01:01:22.640463 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:22.642000 audit[5950]: USER_END pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.652990 systemd[1]: sshd@32-10.0.0.104:22-10.0.0.1:40120.service: Deactivated successfully. Jan 24 01:01:22.657293 systemd[1]: session-34.scope: Deactivated successfully. Jan 24 01:01:22.659817 systemd-logind[1585]: Session 34 logged out. Waiting for processes to exit. Jan 24 01:01:22.662658 systemd-logind[1585]: Removed session 34. Jan 24 01:01:22.642000 audit[5950]: CRED_DISP pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.684848 kernel: audit: type=1106 audit(1769216482.642:987): pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.685062 kernel: audit: type=1104 audit(1769216482.642:988): pid=5950 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:40120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:22.969782 kubelet[2883]: E0124 01:01:22.968059 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:01:27.671202 systemd[1]: Started sshd@33-10.0.0.104:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Jan 24 01:01:27.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:27.674812 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:27.674914 kernel: audit: type=1130 audit(1769216487.669:990): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:27.981946 kubelet[2883]: E0124 01:01:27.981612 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:01:27.994278 kubelet[2883]: E0124 01:01:27.994050 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:01:28.013000 audit[5994]: USER_ACCT pid=5994 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.016870 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA 
SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:28.023120 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:28.017000 audit[5994]: CRED_ACQ pid=5994 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.046038 kernel: audit: type=1101 audit(1769216488.013:991): pid=5994 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.046192 kernel: audit: type=1103 audit(1769216488.017:992): pid=5994 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.046236 kernel: audit: type=1006 audit(1769216488.017:993): pid=5994 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 24 01:01:28.052268 systemd-logind[1585]: New session 35 of user core. 
Jan 24 01:01:28.017000 audit[5994]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd22b836f0 a2=3 a3=0 items=0 ppid=1 pid=5994 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:28.072219 kernel: audit: type=1300 audit(1769216488.017:993): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd22b836f0 a2=3 a3=0 items=0 ppid=1 pid=5994 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:28.072339 kernel: audit: type=1327 audit(1769216488.017:993): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:28.017000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:28.086078 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 24 01:01:28.093000 audit[5994]: USER_START pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.127248 kernel: audit: type=1105 audit(1769216488.093:994): pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.127372 kernel: audit: type=1103 audit(1769216488.096:995): pid=5998 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.096000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.444677 sshd[5998]: Connection closed by 10.0.0.1 port 44114 Jan 24 01:01:28.450140 sshd-session[5994]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:28.456000 audit[5994]: USER_END pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.487100 systemd[1]: sshd@33-10.0.0.104:22-10.0.0.1:44114.service: Deactivated successfully. 
Jan 24 01:01:28.456000 audit[5994]: CRED_DISP pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.509556 systemd[1]: session-35.scope: Deactivated successfully. Jan 24 01:01:28.522968 kernel: audit: type=1106 audit(1769216488.456:996): pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.523056 kernel: audit: type=1104 audit(1769216488.456:997): pid=5994 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:28.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:28.547124 systemd-logind[1585]: Session 35 logged out. Waiting for processes to exit. Jan 24 01:01:28.563134 systemd-logind[1585]: Removed session 35. 
Jan 24 01:01:28.970536 kubelet[2883]: E0124 01:01:28.969784 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:01:33.480771 systemd[1]: Started sshd@34-10.0.0.104:22-10.0.0.1:38806.service - OpenSSH per-connection server daemon (10.0.0.1:38806). Jan 24 01:01:33.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:38806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:33.491872 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:33.491995 kernel: audit: type=1130 audit(1769216493.482:999): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:38806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:33.670000 audit[6014]: USER_ACCT pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.680939 sshd[6014]: Accepted publickey for core from 10.0.0.1 port 38806 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:33.690117 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:33.703808 kernel: audit: type=1101 audit(1769216493.670:1000): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.682000 audit[6014]: CRED_ACQ pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.727841 systemd-logind[1585]: New session 36 of user core. Jan 24 01:01:33.747787 kernel: audit: type=1103 audit(1769216493.682:1001): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.755162 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 24 01:01:33.682000 audit[6014]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd969d7d50 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:33.795041 kernel: audit: type=1006 audit(1769216493.682:1002): pid=6014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 24 01:01:33.795160 kernel: audit: type=1300 audit(1769216493.682:1002): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd969d7d50 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:33.682000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:33.804816 kernel: audit: type=1327 audit(1769216493.682:1002): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:33.807411 kernel: audit: type=1105 audit(1769216493.763:1003): pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.763000 audit[6014]: USER_START pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.769000 audit[6018]: CRED_ACQ pid=6018 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.844554 kernel: audit: type=1103 audit(1769216493.769:1004): pid=6018 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:33.972610 kubelet[2883]: E0124 01:01:33.972483 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:33.982797 kubelet[2883]: E0124 01:01:33.982419 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:01:34.023000 audit[6014]: USER_END pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:34.031033 sshd[6018]: Connection closed by 10.0.0.1 port 38806 Jan 24 01:01:34.023014 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:34.035627 systemd[1]: sshd@34-10.0.0.104:22-10.0.0.1:38806.service: Deactivated successfully. Jan 24 01:01:34.035783 systemd-logind[1585]: Session 36 logged out. 
Waiting for processes to exit. Jan 24 01:01:34.039822 systemd[1]: session-36.scope: Deactivated successfully. Jan 24 01:01:34.043364 systemd-logind[1585]: Removed session 36. Jan 24 01:01:34.023000 audit[6014]: CRED_DISP pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:34.060225 kernel: audit: type=1106 audit(1769216494.023:1005): pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:34.060325 kernel: audit: type=1104 audit(1769216494.023:1006): pid=6014 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:34.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:38806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:34.968641 kubelet[2883]: E0124 01:01:34.968576 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:01:36.969309 kubelet[2883]: E0124 01:01:36.969025 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:01:37.968048 kubelet[2883]: E0124 01:01:37.967909 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:39.072615 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:39.072806 kernel: audit: type=1130 audit(1769216499.064:1008): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:38808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:39.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:38808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:39.064642 systemd[1]: Started sshd@35-10.0.0.104:22-10.0.0.1:38808.service - OpenSSH per-connection server daemon (10.0.0.1:38808). Jan 24 01:01:39.221000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.230471 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:39.231436 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:39.239969 kernel: audit: type=1101 audit(1769216499.221:1009): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.226000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.271074 kernel: audit: type=1103 audit(1769216499.226:1010): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.271252 kernel: audit: type=1006 audit(1769216499.226:1011): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 24 01:01:39.271305 kernel: audit: type=1300 audit(1769216499.226:1011): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffded9d4b50 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:39.226000 audit[6032]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffded9d4b50 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:39.273482 systemd-logind[1585]: New session 37 of user core. Jan 24 01:01:39.226000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:39.292091 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 24 01:01:39.293573 kernel: audit: type=1327 audit(1769216499.226:1011): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:39.304000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.309000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.354303 kernel: audit: type=1105 audit(1769216499.304:1012): pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.354454 kernel: audit: type=1103 audit(1769216499.309:1013): pid=6036 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.501677 sshd[6036]: Connection closed by 10.0.0.1 port 38808 Jan 24 01:01:39.502261 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:39.503000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 24 01:01:39.503000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.543807 kernel: audit: type=1106 audit(1769216499.503:1014): pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.544026 kernel: audit: type=1104 audit(1769216499.503:1015): pid=6032 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:38808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:39.563396 systemd[1]: sshd@35-10.0.0.104:22-10.0.0.1:38808.service: Deactivated successfully. Jan 24 01:01:39.568000 systemd[1]: session-37.scope: Deactivated successfully. Jan 24 01:01:39.579057 systemd-logind[1585]: Session 37 logged out. Waiting for processes to exit. Jan 24 01:01:39.587874 systemd[1]: Started sshd@36-10.0.0.104:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Jan 24 01:01:39.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.104:22-10.0.0.1:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:39.594367 systemd-logind[1585]: Removed session 37. 
Jan 24 01:01:39.732663 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:39.728000 audit[6050]: USER_ACCT pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.732000 audit[6050]: CRED_ACQ pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.732000 audit[6050]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe22fb0b20 a2=3 a3=0 items=0 ppid=1 pid=6050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:39.732000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:39.735567 sshd-session[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:39.769023 systemd-logind[1585]: New session 38 of user core. Jan 24 01:01:39.782231 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 24 01:01:39.790000 audit[6050]: USER_START pid=6050 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:39.800000 audit[6054]: CRED_ACQ pid=6054 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.612567 sshd[6054]: Connection closed by 10.0.0.1 port 38810 Jan 24 01:01:40.611873 sshd-session[6050]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:40.616000 audit[6050]: USER_END pid=6050 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.616000 audit[6050]: CRED_DISP pid=6050 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.104:22-10.0.0.1:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:40.641087 systemd[1]: sshd@36-10.0.0.104:22-10.0.0.1:38810.service: Deactivated successfully. Jan 24 01:01:40.645353 systemd[1]: session-38.scope: Deactivated successfully. Jan 24 01:01:40.650113 systemd-logind[1585]: Session 38 logged out. Waiting for processes to exit. 
Jan 24 01:01:40.654532 systemd[1]: Started sshd@37-10.0.0.104:22-10.0.0.1:38818.service - OpenSSH per-connection server daemon (10.0.0.1:38818). Jan 24 01:01:40.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.104:22-10.0.0.1:38818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:40.661263 systemd-logind[1585]: Removed session 38. Jan 24 01:01:40.779000 audit[6068]: USER_ACCT pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.782004 sshd[6068]: Accepted publickey for core from 10.0.0.1 port 38818 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:40.786000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.786000 audit[6068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd81618800 a2=3 a3=0 items=0 ppid=1 pid=6068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:40.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:40.790587 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:40.804827 systemd-logind[1585]: New session 39 of user core. Jan 24 01:01:40.812420 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 24 01:01:40.824000 audit[6068]: USER_START pid=6068 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:40.829000 audit[6072]: CRED_ACQ pid=6072 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:41.974242 kubelet[2883]: E0124 01:01:41.974149 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:01:41.984212 kubelet[2883]: E0124 01:01:41.984028 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:01:41.991788 kubelet[2883]: E0124 01:01:41.991387 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:01:42.100520 sshd[6072]: Connection closed by 10.0.0.1 port 38818 Jan 24 01:01:42.098316 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:42.100000 audit[6068]: USER_END pid=6068 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.102000 audit[6068]: CRED_DISP pid=6068 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.113072 systemd[1]: sshd@37-10.0.0.104:22-10.0.0.1:38818.service: Deactivated successfully. Jan 24 01:01:42.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.104:22-10.0.0.1:38818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:42.116337 systemd[1]: session-39.scope: Deactivated successfully. Jan 24 01:01:42.126078 systemd-logind[1585]: Session 39 logged out. Waiting for processes to exit. Jan 24 01:01:42.124000 audit[6087]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:42.124000 audit[6087]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc00106380 a2=0 a3=7ffc0010636c items=0 ppid=3048 pid=6087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:42.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:42.130893 systemd[1]: Started sshd@38-10.0.0.104:22-10.0.0.1:38832.service - OpenSSH per-connection server daemon (10.0.0.1:38832). Jan 24 01:01:42.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.104:22-10.0.0.1:38832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:42.133000 audit[6087]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:42.133000 audit[6087]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc00106380 a2=0 a3=0 items=0 ppid=3048 pid=6087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:42.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:42.137648 systemd-logind[1585]: Removed session 39. Jan 24 01:01:42.198000 audit[6092]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6092 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:42.198000 audit[6092]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcfa5613e0 a2=0 a3=7ffcfa5613cc items=0 ppid=3048 pid=6092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:42.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:42.207000 audit[6092]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6092 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:42.207000 audit[6092]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcfa5613e0 a2=0 a3=0 items=0 ppid=3048 pid=6092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:42.207000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:42.408000 audit[6089]: USER_ACCT pid=6089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.410000 audit[6089]: CRED_ACQ pid=6089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.412537 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 38832 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:42.411000 audit[6089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff93fbde40 a2=3 a3=0 items=0 ppid=1 pid=6089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:42.411000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:42.414569 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:42.438846 systemd-logind[1585]: New session 40 of user core. Jan 24 01:01:42.450353 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 24 01:01:42.457000 audit[6089]: USER_START pid=6089 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.464000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:42.844861 containerd[1609]: time="2026-01-24T01:01:42.843478204Z" level=info msg="container event discarded" container=58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f type=CONTAINER_CREATED_EVENT Jan 24 01:01:42.844861 containerd[1609]: time="2026-01-24T01:01:42.843537414Z" level=info msg="container event discarded" container=58a2087381c0884d3b4d3303a1cb8787068b7bb376721bbc31edfb7c650d6e7f type=CONTAINER_STARTED_EVENT Jan 24 01:01:42.983344 containerd[1609]: time="2026-01-24T01:01:42.983168317Z" level=info msg="container event discarded" container=2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179 type=CONTAINER_CREATED_EVENT Jan 24 01:01:42.983344 containerd[1609]: time="2026-01-24T01:01:42.983251903Z" level=info msg="container event discarded" container=2112e1ce16d0b72519c1b74279a3be569e18320c62df6fa82870bb19742ad179 type=CONTAINER_STARTED_EVENT Jan 24 01:01:43.160678 sshd[6096]: Connection closed by 10.0.0.1 port 38832 Jan 24 01:01:43.165475 sshd-session[6089]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:43.174000 audit[6089]: USER_END pid=6089 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:43.178000 audit[6089]: CRED_DISP pid=6089 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:43.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.104:22-10.0.0.1:48528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:43.184226 systemd[1]: Started sshd@39-10.0.0.104:22-10.0.0.1:48528.service - OpenSSH per-connection server daemon (10.0.0.1:48528). Jan 24 01:01:43.193290 systemd[1]: sshd@38-10.0.0.104:22-10.0.0.1:38832.service: Deactivated successfully. Jan 24 01:01:43.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.104:22-10.0.0.1:38832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:43.200595 systemd[1]: session-40.scope: Deactivated successfully. Jan 24 01:01:43.400476 systemd-logind[1585]: Session 40 logged out. Waiting for processes to exit. Jan 24 01:01:43.500340 systemd-logind[1585]: Removed session 40. 
Jan 24 01:01:45.098000 audit[6105]: USER_ACCT pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.110449 sshd[6105]: Accepted publickey for core from 10.0.0.1 port 48528 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:45.131818 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 24 01:01:45.136219 kernel: audit: type=1101 audit(1769216505.098:1049): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.120029 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:45.165542 systemd-logind[1585]: New session 41 of user core. 
Jan 24 01:01:45.190084 kernel: audit: type=1103 audit(1769216505.108:1050): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.108000 audit[6105]: CRED_ACQ pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.287447 kernel: audit: type=1006 audit(1769216505.108:1051): pid=6105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 24 01:01:45.108000 audit[6105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea6109e40 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:45.449054 kernel: audit: type=1300 audit(1769216505.108:1051): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea6109e40 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:45.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:45.491076 kernel: audit: type=1327 audit(1769216505.108:1051): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:45.497327 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 24 01:01:45.514000 audit[6105]: USER_START pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.569391 kernel: audit: type=1105 audit(1769216505.514:1052): pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.569505 kernel: audit: type=1103 audit(1769216505.521:1053): pid=6112 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:45.521000 audit[6112]: CRED_ACQ pid=6112 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:46.068373 sshd[6112]: Connection closed by 10.0.0.1 port 48528 Jan 24 01:01:46.125861 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:46.146000 audit[6105]: USER_END pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:46.207324 systemd-logind[1585]: Session 41 logged out. Waiting for processes to exit. 
Jan 24 01:01:46.211078 systemd[1]: sshd@39-10.0.0.104:22-10.0.0.1:48528.service: Deactivated successfully. Jan 24 01:01:46.220625 systemd[1]: session-41.scope: Deactivated successfully. Jan 24 01:01:46.146000 audit[6105]: CRED_DISP pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:46.248486 systemd-logind[1585]: Removed session 41. Jan 24 01:01:46.251575 kernel: audit: type=1106 audit(1769216506.146:1054): pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:46.251673 kernel: audit: type=1104 audit(1769216506.146:1055): pid=6105 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:46.251800 kernel: audit: type=1131 audit(1769216506.206:1056): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.104:22-10.0.0.1:48528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:46.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.104:22-10.0.0.1:48528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:47.978271 kubelet[2883]: E0124 01:01:47.977694 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:01:48.000254 kubelet[2883]: E0124 01:01:48.000158 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:01:48.978793 kubelet[2883]: E0124 01:01:48.971558 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:01:49.603370 containerd[1609]: time="2026-01-24T01:01:49.603143098Z" level=info msg="container event discarded" container=d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c type=CONTAINER_CREATED_EVENT Jan 24 01:01:50.065292 containerd[1609]: time="2026-01-24T01:01:50.058896338Z" level=info msg="container event discarded" container=d5b161374dd0270f95d00c80bbddbd4cbf3eb8c15fbe50498a6705900b26173c type=CONTAINER_STARTED_EVENT Jan 24 01:01:51.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.104:22-10.0.0.1:48544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:51.096215 systemd[1]: Started sshd@40-10.0.0.104:22-10.0.0.1:48544.service - OpenSSH per-connection server daemon (10.0.0.1:48544). Jan 24 01:01:51.105828 kernel: audit: type=1130 audit(1769216511.094:1057): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.104:22-10.0.0.1:48544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:51.226602 containerd[1609]: time="2026-01-24T01:01:51.226459990Z" level=info msg="container event discarded" container=892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b type=CONTAINER_CREATED_EVENT Jan 24 01:01:51.228000 audit[6126]: USER_ACCT pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.250385 systemd-logind[1585]: New session 42 of user core. 
Jan 24 01:01:51.258825 kernel: audit: type=1101 audit(1769216511.228:1058): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.239876 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:51.259281 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 48544 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:51.232000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.278267 kernel: audit: type=1103 audit(1769216511.232:1059): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.278336 kernel: audit: type=1006 audit(1769216511.232:1060): pid=6126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 24 01:01:51.232000 audit[6126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca37e4850 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:51.280173 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 24 01:01:51.297885 kernel: audit: type=1300 audit(1769216511.232:1060): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca37e4850 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:51.297943 kernel: audit: type=1327 audit(1769216511.232:1060): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:51.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:51.297000 audit[6126]: USER_START pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.338201 kernel: audit: type=1105 audit(1769216511.297:1061): pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.305000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.357917 kernel: audit: type=1103 audit(1769216511.305:1062): pid=6130 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.503264 sshd[6130]: Connection closed by 10.0.0.1 port 
48544 Jan 24 01:01:51.505194 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:51.509000 audit[6126]: USER_END pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.526554 systemd[1]: sshd@40-10.0.0.104:22-10.0.0.1:48544.service: Deactivated successfully. Jan 24 01:01:51.538149 systemd[1]: session-42.scope: Deactivated successfully. Jan 24 01:01:51.540684 kernel: audit: type=1106 audit(1769216511.509:1063): pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.510000 audit[6126]: CRED_DISP pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:51.550373 systemd-logind[1585]: Session 42 logged out. Waiting for processes to exit. Jan 24 01:01:51.564297 systemd-logind[1585]: Removed session 42. Jan 24 01:01:51.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.104:22-10.0.0.1:48544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:51.574797 kernel: audit: type=1104 audit(1769216511.510:1064): pid=6126 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:52.084621 containerd[1609]: time="2026-01-24T01:01:52.084385991Z" level=info msg="container event discarded" container=892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b type=CONTAINER_STARTED_EVENT Jan 24 01:01:52.485205 containerd[1609]: time="2026-01-24T01:01:52.485099313Z" level=info msg="container event discarded" container=892dddbae052f7e80bdbd418448ba8c1cb41c4d1e9d0a5d67410760838a5649b type=CONTAINER_STOPPED_EVENT Jan 24 01:01:52.972052 kubelet[2883]: E0124 01:01:52.969676 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:01:53.974542 kubelet[2883]: E0124 01:01:53.973514 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:01:54.973814 kubelet[2883]: E0124 01:01:54.971279 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:01:56.531508 systemd[1]: Started sshd@41-10.0.0.104:22-10.0.0.1:38674.service - OpenSSH per-connection server daemon (10.0.0.1:38674). Jan 24 01:01:56.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.104:22-10.0.0.1:38674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:56.538134 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:56.538516 kernel: audit: type=1130 audit(1769216516.530:1066): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.104:22-10.0.0.1:38674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:56.672000 audit[6167]: USER_ACCT pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.677326 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 38674 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:01:56.684914 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:56.681000 audit[6167]: CRED_ACQ pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.700289 systemd-logind[1585]: New session 43 of user core. Jan 24 01:01:56.710122 kernel: audit: type=1101 audit(1769216516.672:1067): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.710222 kernel: audit: type=1103 audit(1769216516.681:1068): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.710265 kernel: audit: type=1006 audit(1769216516.681:1069): pid=6167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 24 01:01:56.719789 kernel: audit: type=1300 audit(1769216516.681:1069): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdd22e820 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:56.681000 audit[6167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdd22e820 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:56.729381 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 24 01:01:56.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:56.753475 kernel: audit: type=1327 audit(1769216516.681:1069): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:56.753000 audit[6167]: USER_START pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.762000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.796266 kernel: audit: type=1105 audit(1769216516.753:1070): pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.799824 kernel: audit: type=1103 audit(1769216516.762:1071): pid=6173 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.988224 sshd[6173]: Connection closed by 10.0.0.1 port 38674 Jan 24 01:01:56.989636 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:56.996000 audit[6167]: USER_END pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:57.007929 systemd[1]: sshd@41-10.0.0.104:22-10.0.0.1:38674.service: Deactivated successfully. Jan 24 01:01:57.014129 systemd[1]: session-43.scope: Deactivated successfully. Jan 24 01:01:57.033098 kernel: audit: type=1106 audit(1769216516.996:1072): pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:56.996000 audit[6167]: CRED_DISP pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:57.039214 systemd-logind[1585]: Session 43 logged out. Waiting for processes to exit. Jan 24 01:01:57.042167 systemd-logind[1585]: Removed session 43. Jan 24 01:01:57.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.104:22-10.0.0.1:38674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:57.054207 kernel: audit: type=1104 audit(1769216516.996:1073): pid=6167 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:00.970799 kubelet[2883]: E0124 01:02:00.969615 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:02:01.975801 kubelet[2883]: E0124 01:02:01.975387 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:02:01.984180 kubelet[2883]: E0124 01:02:01.984124 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:02:02.024476 systemd[1]: Started sshd@42-10.0.0.104:22-10.0.0.1:38678.service - OpenSSH per-connection server daemon (10.0.0.1:38678). Jan 24 01:02:02.029889 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:02:02.029936 kernel: audit: type=1130 audit(1769216522.022:1075): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.104:22-10.0.0.1:38678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:02.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.104:22-10.0.0.1:38678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:02:02.180000 audit[6187]: USER_ACCT pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.186953 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:02:02.195480 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 38678 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:02:02.220852 kernel: audit: type=1101 audit(1769216522.180:1076): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.220957 kernel: audit: type=1103 audit(1769216522.183:1077): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.183000 audit[6187]: CRED_ACQ pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.216404 systemd-logind[1585]: New session 44 of user core. 
Jan 24 01:02:02.232906 kernel: audit: type=1006 audit(1769216522.183:1078): pid=6187 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 24 01:02:02.233032 kernel: audit: type=1300 audit(1769216522.183:1078): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc52aa6ac0 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:02.183000 audit[6187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc52aa6ac0 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:02.183000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:02.254116 kernel: audit: type=1327 audit(1769216522.183:1078): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:02.260165 systemd[1]: Started session-44.scope - Session 44 of User core. 
Jan 24 01:02:02.267000 audit[6187]: USER_START pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.287796 kernel: audit: type=1105 audit(1769216522.267:1079): pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.302793 kernel: audit: type=1103 audit(1769216522.273:1080): pid=6191 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.273000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.522268 sshd[6191]: Connection closed by 10.0.0.1 port 38678 Jan 24 01:02:02.523936 sshd-session[6187]: pam_unix(sshd:session): session closed for user core Jan 24 01:02:02.527000 audit[6187]: USER_END pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.533969 systemd[1]: sshd@42-10.0.0.104:22-10.0.0.1:38678.service: Deactivated successfully. 
Jan 24 01:02:02.538268 systemd[1]: session-44.scope: Deactivated successfully. Jan 24 01:02:02.544201 systemd-logind[1585]: Session 44 logged out. Waiting for processes to exit. Jan 24 01:02:02.546559 systemd-logind[1585]: Removed session 44. Jan 24 01:02:02.527000 audit[6187]: CRED_DISP pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.570362 kernel: audit: type=1106 audit(1769216522.527:1081): pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.570665 kernel: audit: type=1104 audit(1769216522.527:1082): pid=6187 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:02.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.104:22-10.0.0.1:38678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:02:04.646376 containerd[1609]: time="2026-01-24T01:02:04.645609884Z" level=info msg="container event discarded" container=692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4 type=CONTAINER_CREATED_EVENT Jan 24 01:02:05.184802 containerd[1609]: time="2026-01-24T01:02:05.180955102Z" level=info msg="container event discarded" container=692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4 type=CONTAINER_STARTED_EVENT Jan 24 01:02:05.992111 kubelet[2883]: E0124 01:02:05.990978 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:02:05.992111 kubelet[2883]: E0124 01:02:05.991785 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:02:06.967892 kubelet[2883]: E0124 01:02:06.967504 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:02:07.486407 containerd[1609]: time="2026-01-24T01:02:07.486296506Z" level=info 
msg="container event discarded" container=692ce1a6fa3fe01bd406e64c43a9a5a8014a3b7d1e13a2bc5ff7438c315cc2f4 type=CONTAINER_STOPPED_EVENT Jan 24 01:02:07.590308 systemd[1]: Started sshd@43-10.0.0.104:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Jan 24 01:02:07.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.104:22-10.0.0.1:56680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:07.637665 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:02:07.637933 kernel: audit: type=1130 audit(1769216527.592:1084): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.104:22-10.0.0.1:56680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:07.862000 audit[6207]: USER_ACCT pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:07.871903 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:02:07.874262 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:02:07.890488 kernel: audit: type=1101 audit(1769216527.862:1085): pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:07.870000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:07.922087 systemd-logind[1585]: New session 45 of user core. Jan 24 01:02:07.930543 kernel: audit: type=1103 audit(1769216527.870:1086): pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:07.930640 kernel: audit: type=1006 audit(1769216527.870:1087): pid=6207 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 24 01:02:07.870000 audit[6207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80d87bd0 a2=3 a3=0 items=0 ppid=1 pid=6207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:07.984628 kernel: audit: type=1300 audit(1769216527.870:1087): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80d87bd0 a2=3 a3=0 items=0 ppid=1 pid=6207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:07.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:07.999475 kernel: audit: type=1327 audit(1769216527.870:1087): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:08.001236 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 24 01:02:08.019285 kubelet[2883]: E0124 01:02:08.014644 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:02:08.048000 audit[6207]: USER_START pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.085971 kernel: audit: type=1105 audit(1769216528.048:1088): pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.062000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 
01:02:08.103787 kernel: audit: type=1103 audit(1769216528.062:1089): pid=6211 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.317621 sshd[6211]: Connection closed by 10.0.0.1 port 56680 Jan 24 01:02:08.319187 sshd-session[6207]: pam_unix(sshd:session): session closed for user core Jan 24 01:02:08.323000 audit[6207]: USER_END pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.361916 kernel: audit: type=1106 audit(1769216528.323:1090): pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.362079 kernel: audit: type=1104 audit(1769216528.325:1091): pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.325000 audit[6207]: CRED_DISP pid=6207 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:08.352442 systemd[1]: sshd@43-10.0.0.104:22-10.0.0.1:56680.service: Deactivated successfully. Jan 24 01:02:08.361242 systemd[1]: session-45.scope: Deactivated successfully. 
Jan 24 01:02:08.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.104:22-10.0.0.1:56680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:08.375241 systemd-logind[1585]: Session 45 logged out. Waiting for processes to exit. Jan 24 01:02:08.378631 systemd-logind[1585]: Removed session 45. Jan 24 01:02:08.636000 audit[6225]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:02:08.636000 audit[6225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd5c6723b0 a2=0 a3=7ffd5c67239c items=0 ppid=3048 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:08.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:02:08.652000 audit[6225]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:02:08.652000 audit[6225]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd5c6723b0 a2=0 a3=7ffd5c67239c items=0 ppid=3048 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:08.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:02:13.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.104:22-10.0.0.1:53778 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:13.341620 systemd[1]: Started sshd@44-10.0.0.104:22-10.0.0.1:53778.service - OpenSSH per-connection server daemon (10.0.0.1:53778). Jan 24 01:02:13.356884 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 01:02:13.357067 kernel: audit: type=1130 audit(1769216533.340:1095): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.104:22-10.0.0.1:53778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:13.539349 kernel: audit: type=1101 audit(1769216533.516:1096): pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.516000 audit[6249]: USER_ACCT pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.528071 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:02:13.540207 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 53778 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:02:13.521000 audit[6249]: CRED_ACQ pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.555009 systemd-logind[1585]: New session 46 of user core. 
Jan 24 01:02:13.557840 kernel: audit: type=1103 audit(1769216533.521:1097): pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.521000 audit[6249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6554e70 a2=3 a3=0 items=0 ppid=1 pid=6249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:13.592316 kernel: audit: type=1006 audit(1769216533.521:1098): pid=6249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 24 01:02:13.592426 kernel: audit: type=1300 audit(1769216533.521:1098): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6554e70 a2=3 a3=0 items=0 ppid=1 pid=6249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:13.521000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:13.605056 kernel: audit: type=1327 audit(1769216533.521:1098): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:13.607259 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 24 01:02:13.618000 audit[6249]: USER_START pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.629000 audit[6253]: CRED_ACQ pid=6253 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.665562 kernel: audit: type=1105 audit(1769216533.618:1099): pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.665631 kernel: audit: type=1103 audit(1769216533.629:1100): pid=6253 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.861260 sshd[6253]: Connection closed by 10.0.0.1 port 53778 Jan 24 01:02:13.859279 sshd-session[6249]: pam_unix(sshd:session): session closed for user core Jan 24 01:02:13.862000 audit[6249]: USER_END pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.870027 systemd[1]: sshd@44-10.0.0.104:22-10.0.0.1:53778.service: Deactivated successfully. 
Jan 24 01:02:13.873878 systemd[1]: session-46.scope: Deactivated successfully. Jan 24 01:02:13.876053 systemd-logind[1585]: Session 46 logged out. Waiting for processes to exit. Jan 24 01:02:13.889545 kernel: audit: type=1106 audit(1769216533.862:1101): pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.889646 kernel: audit: type=1104 audit(1769216533.862:1102): pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.862000 audit[6249]: CRED_DISP pid=6249 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:13.880315 systemd-logind[1585]: Removed session 46. Jan 24 01:02:13.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.104:22-10.0.0.1:53778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:02:13.972576 kubelet[2883]: E0124 01:02:13.972118 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:02:14.975515 kubelet[2883]: E0124 01:02:14.975403 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691" Jan 24 01:02:16.970826 kubelet[2883]: E0124 01:02:16.970534 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:02:16.978105 kubelet[2883]: E0124 01:02:16.978040 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-tkqwx" podUID="5a9025b1-4c1d-4d71-8add-e1566c4e04cc" Jan 24 01:02:18.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.104:22-10.0.0.1:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:18.902637 systemd[1]: Started sshd@45-10.0.0.104:22-10.0.0.1:53786.service - OpenSSH per-connection server daemon (10.0.0.1:53786). Jan 24 01:02:18.908451 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:02:18.908508 kernel: audit: type=1130 audit(1769216538.902:1104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.104:22-10.0.0.1:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:02:19.077000 audit[6266]: USER_ACCT pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.084650 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 53786 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:02:19.092043 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:02:19.101231 kernel: audit: type=1101 audit(1769216539.077:1105): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.101327 kernel: audit: type=1103 audit(1769216539.084:1106): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.084000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.115565 systemd-logind[1585]: New session 47 of user core. 
Jan 24 01:02:19.084000 audit[6266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5d8ca440 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:19.143147 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 24 01:02:19.152004 kernel: audit: type=1006 audit(1769216539.084:1107): pid=6266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 24 01:02:19.152080 kernel: audit: type=1300 audit(1769216539.084:1107): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5d8ca440 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:19.159077 kernel: audit: type=1327 audit(1769216539.084:1107): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:19.084000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:19.158000 audit[6266]: USER_START pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.189513 kernel: audit: type=1105 audit(1769216539.158:1108): pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.189633 
kernel: audit: type=1103 audit(1769216539.164:1109): pid=6270 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.164000 audit[6270]: CRED_ACQ pid=6270 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.538502 sshd[6270]: Connection closed by 10.0.0.1 port 53786 Jan 24 01:02:19.543844 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Jan 24 01:02:19.550000 audit[6266]: USER_END pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.561923 systemd[1]: sshd@45-10.0.0.104:22-10.0.0.1:53786.service: Deactivated successfully. Jan 24 01:02:19.575240 systemd[1]: session-47.scope: Deactivated successfully. Jan 24 01:02:19.550000 audit[6266]: CRED_DISP pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.584078 systemd-logind[1585]: Session 47 logged out. Waiting for processes to exit. Jan 24 01:02:19.586884 systemd-logind[1585]: Removed session 47. 
Jan 24 01:02:19.602835 kernel: audit: type=1106 audit(1769216539.550:1110): pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.603079 kernel: audit: type=1104 audit(1769216539.550:1111): pid=6266 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:19.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.104:22-10.0.0.1:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:19.969407 kubelet[2883]: E0124 01:02:19.968665 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b9dc86db-sl4tg" podUID="70bde68b-f37d-4bad-bf48-1635753f011a" Jan 24 01:02:20.968833 kubelet[2883]: E0124 01:02:20.968054 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5985c58466-q852p" podUID="6f07ac71-f9bf-4f16-8022-eeee9f625fbd" Jan 24 01:02:21.098528 containerd[1609]: time="2026-01-24T01:02:21.098348492Z" level=info msg="container event discarded" container=4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6 type=CONTAINER_CREATED_EVENT Jan 24 01:02:21.475125 containerd[1609]: time="2026-01-24T01:02:21.475031043Z" level=info msg="container event discarded" container=4b510ba95a7a7af045beea48384e971bbd66774418a44a25b020997c4cb926c6 type=CONTAINER_STARTED_EVENT Jan 24 01:02:21.997089 kubelet[2883]: E0124 01:02:21.995890 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkd9m" podUID="e6e0379d-4209-43c1-9c94-53533c368367" Jan 24 01:02:23.590820 containerd[1609]: time="2026-01-24T01:02:23.589853693Z" level=info msg="container event discarded" container=aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311 type=CONTAINER_CREATED_EVENT Jan 24 01:02:23.590820 containerd[1609]: time="2026-01-24T01:02:23.589910148Z" level=info msg="container event discarded" container=aa39a219b8110e4c3e7560143477396efb94c343641ceb1b71096aa53de11311 
type=CONTAINER_STARTED_EVENT Jan 24 01:02:23.693047 containerd[1609]: time="2026-01-24T01:02:23.692118649Z" level=info msg="container event discarded" container=c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a type=CONTAINER_CREATED_EVENT Jan 24 01:02:23.693047 containerd[1609]: time="2026-01-24T01:02:23.692178791Z" level=info msg="container event discarded" container=c1f7a028370fd96abb5faca633ee55b85d1079240da5da1fe9f659b3ecd8723a type=CONTAINER_STARTED_EVENT Jan 24 01:02:23.726362 containerd[1609]: time="2026-01-24T01:02:23.725846951Z" level=info msg="container event discarded" container=346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050 type=CONTAINER_CREATED_EVENT Jan 24 01:02:24.211910 containerd[1609]: time="2026-01-24T01:02:24.208136865Z" level=info msg="container event discarded" container=346ee64a66fe5d3e81e9967000a670c409b32bdc2b312edf47e2d4d4f094f050 type=CONTAINER_STARTED_EVENT Jan 24 01:02:24.507380 containerd[1609]: time="2026-01-24T01:02:24.506818385Z" level=info msg="container event discarded" container=8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd type=CONTAINER_CREATED_EVENT Jan 24 01:02:24.507380 containerd[1609]: time="2026-01-24T01:02:24.507064994Z" level=info msg="container event discarded" container=8f00611d8eae0a3fd59db3d2793c83f296215377c8d434b2823398631ed5ecdd type=CONTAINER_STARTED_EVENT Jan 24 01:02:24.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.104:22-10.0.0.1:43978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:24.581797 systemd[1]: Started sshd@46-10.0.0.104:22-10.0.0.1:43978.service - OpenSSH per-connection server daemon (10.0.0.1:43978). 
Jan 24 01:02:24.589009 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:02:24.589119 kernel: audit: type=1130 audit(1769216544.583:1113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.104:22-10.0.0.1:43978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:02:24.972806 kernel: audit: type=1101 audit(1769216544.931:1114): pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:24.931000 audit[6307]: USER_ACCT pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:24.973698 sshd[6307]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:Z5oECdOkQaQvqVZlCPpqe/zdZdPFQRrfgVxZLrQ/kiA Jan 24 01:02:24.989863 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:02:24.983000 audit[6307]: CRED_ACQ pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.029811 kernel: audit: type=1103 audit(1769216544.983:1115): pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:24.983000 audit[6307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd4309260 a2=3 a3=0 items=0 ppid=1 
pid=6307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:25.070018 kernel: audit: type=1006 audit(1769216544.983:1116): pid=6307 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 24 01:02:25.071110 kernel: audit: type=1300 audit(1769216544.983:1116): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd4309260 a2=3 a3=0 items=0 ppid=1 pid=6307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:02:25.071179 kernel: audit: type=1327 audit(1769216544.983:1116): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:24.983000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:02:25.096107 systemd-logind[1585]: New session 48 of user core. Jan 24 01:02:25.101319 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 24 01:02:25.120000 audit[6307]: USER_START pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.169234 kernel: audit: type=1105 audit(1769216545.120:1117): pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.122000 audit[6311]: CRED_ACQ pid=6311 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.196624 kernel: audit: type=1103 audit(1769216545.122:1118): pid=6311 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.530108 sshd[6311]: Connection closed by 10.0.0.1 port 43978 Jan 24 01:02:25.528853 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Jan 24 01:02:25.534000 audit[6307]: USER_END pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.562158 systemd-logind[1585]: Session 48 logged out. Waiting for processes to exit. 
Jan 24 01:02:25.570688 systemd[1]: sshd@46-10.0.0.104:22-10.0.0.1:43978.service: Deactivated successfully. Jan 24 01:02:25.585314 systemd[1]: session-48.scope: Deactivated successfully. Jan 24 01:02:25.591500 systemd-logind[1585]: Removed session 48. Jan 24 01:02:25.613886 kernel: audit: type=1106 audit(1769216545.534:1119): pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.614077 kernel: audit: type=1104 audit(1769216545.535:1120): pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.535000 audit[6307]: CRED_DISP pid=6307 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:02:25.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.104:22-10.0.0.1:43978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:02:25.885997 containerd[1609]: time="2026-01-24T01:02:25.884641211Z" level=info msg="container event discarded" container=15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b type=CONTAINER_CREATED_EVENT Jan 24 01:02:25.885997 containerd[1609]: time="2026-01-24T01:02:25.885104431Z" level=info msg="container event discarded" container=15660e7103b880c6f0b69fbb8a3422a76bc2be0e5b94fe2405c0b20a6b14663b type=CONTAINER_STARTED_EVENT Jan 24 01:02:25.976612 kubelet[2883]: E0124 01:02:25.974877 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2256s" podUID="9fdbb8ee-a6f4-499c-b584-8b75c3240604" Jan 24 01:02:26.977475 kubelet[2883]: E0124 01:02:26.976628 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58d88bd994-v27xr" podUID="ae809202-0be0-4f65-b3c1-0018455a5691"