Jan 28 01:29:52.468581 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:27:36 -00 2026
Jan 28 01:29:52.468709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4
Jan 28 01:29:52.468725 kernel: BIOS-provided physical RAM map:
Jan 28 01:29:52.468743 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:29:52.468751 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:29:52.468760 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:29:52.468770 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:29:52.468780 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:29:52.468790 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:29:52.468801 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:29:52.468809 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 28 01:29:52.468823 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:29:52.468833 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:29:52.468842 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:29:52.468856 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:29:52.468865 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:29:52.468879 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:29:52.468889 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:29:52.468901 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:29:52.468910 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:29:52.468919 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:29:52.468929 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:29:52.468939 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:29:52.468950 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:29:52.468961 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:29:52.468970 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:29:52.468984 kernel: NX (Execute Disable) protection: active
Jan 28 01:29:52.468995 kernel: APIC: Static calls initialized
Jan 28 01:29:52.469006 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 28 01:29:52.469017 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 28 01:29:52.469026 kernel: extended physical RAM map:
Jan 28 01:29:52.469036 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:29:52.469045 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:29:52.469057 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:29:52.469067 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:29:52.469076 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:29:52.469087 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:29:52.469103 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:29:52.471922 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 28 01:29:52.471936 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 28 01:29:52.471956 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 28 01:29:52.471970 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 28 01:29:52.471981 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 28 01:29:52.471991 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:29:52.472003 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:29:52.472013 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:29:52.472024 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:29:52.472035 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:29:52.472045 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:29:52.472056 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:29:52.472070 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:29:52.472080 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:29:52.472091 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:29:52.472101 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:29:52.472173 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:29:52.472184 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:29:52.472195 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:29:52.472205 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:29:52.472216 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:29:52.472227 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 28 01:29:52.472237 kernel: random: crng init done
Jan 28 01:29:52.472252 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 28 01:29:52.472262 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 28 01:29:52.472273 kernel: secureboot: Secure boot disabled
Jan 28 01:29:52.472284 kernel: SMBIOS 2.8 present.
Jan 28 01:29:52.472373 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 28 01:29:52.472384 kernel: DMI: Memory slots populated: 1/1
Jan 28 01:29:52.472394 kernel: Hypervisor detected: KVM
Jan 28 01:29:52.472406 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:29:52.472419 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:29:52.472428 kernel: kvm-clock: using sched offset of 31706682117 cycles
Jan 28 01:29:52.472439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:29:52.472454 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:29:52.472464 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:29:52.472477 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:29:52.472489 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:29:52.472500 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:29:52.472510 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:29:52.472520 kernel: Using GB pages for direct mapping
Jan 28 01:29:52.472535 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:29:52.472546 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:29:52.472557 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:29:52.472569 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472580 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472591 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:29:52.472602 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472617 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472628 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472639 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:29:52.472650 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:29:52.472661 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:29:52.472672 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:29:52.472683 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:29:52.472697 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:29:52.472708 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:29:52.472720 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:29:52.472731 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:29:52.472742 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:29:52.472753 kernel: No NUMA configuration found
Jan 28 01:29:52.472764 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 28 01:29:52.472778 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 28 01:29:52.472789 kernel: Zone ranges:
Jan 28 01:29:52.472800 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:29:52.472811 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 28 01:29:52.472822 kernel: Normal empty
Jan 28 01:29:52.472833 kernel: Device empty
Jan 28 01:29:52.472844 kernel: Movable zone start for each node
Jan 28 01:29:52.472855 kernel: Early memory node ranges
Jan 28 01:29:52.472869 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:29:52.472879 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:29:52.472890 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:29:52.472901 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 28 01:29:52.472912 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 28 01:29:52.472923 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 28 01:29:52.472934 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 28 01:29:52.472947 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 28 01:29:52.472958 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 28 01:29:52.472970 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:29:52.472990 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:29:52.473005 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:29:52.473016 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:29:52.473027 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 28 01:29:52.473039 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 28 01:29:52.473050 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 28 01:29:52.473062 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 28 01:29:52.473076 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 28 01:29:52.473088 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:29:52.473099 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:29:52.473171 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:29:52.473187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:29:52.473199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:29:52.473210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:29:52.473222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:29:52.473233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:29:52.473245 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:29:52.473257 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:29:52.473271 kernel: TSC deadline timer available
Jan 28 01:29:52.473282 kernel: CPU topo: Max. logical packages: 1
Jan 28 01:29:52.473371 kernel: CPU topo: Max. logical dies: 1
Jan 28 01:29:52.473383 kernel: CPU topo: Max. dies per package: 1
Jan 28 01:29:52.473395 kernel: CPU topo: Max. threads per core: 1
Jan 28 01:29:52.473408 kernel: CPU topo: Num. cores per package: 4
Jan 28 01:29:52.473420 kernel: CPU topo: Num. threads per package: 4
Jan 28 01:29:52.473434 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 01:29:52.473445 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:29:52.473456 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:29:52.473468 kernel: kvm-guest: setup PV sched yield
Jan 28 01:29:52.473480 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 28 01:29:52.473491 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:29:52.473502 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:29:52.473517 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:29:52.473531 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 01:29:52.473548 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 01:29:52.473558 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:29:52.473569 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:29:52.473582 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:29:52.473595 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4
Jan 28 01:29:52.473611 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:29:52.473623 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:29:52.473635 kernel: Fallback order for Node 0: 0
Jan 28 01:29:52.473646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 28 01:29:52.473658 kernel: Policy zone: DMA32
Jan 28 01:29:52.473669 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:29:52.473681 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:29:52.473695 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 28 01:29:52.473707 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 01:29:52.473718 kernel: Dynamic Preempt: voluntary
Jan 28 01:29:52.473730 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:29:52.473742 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:29:52.473754 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:29:52.473766 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:29:52.473781 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:29:52.473792 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:29:52.473804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:29:52.473815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:29:52.473827 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:29:52.473839 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:29:52.473851 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:29:52.473866 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:29:52.473878 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:29:52.473889 kernel: Console: colour dummy device 80x25
Jan 28 01:29:52.473900 kernel: printk: legacy console [ttyS0] enabled
Jan 28 01:29:52.473912 kernel: ACPI: Core revision 20240827
Jan 28 01:29:52.473924 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:29:52.473935 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:29:52.473950 kernel: x2apic enabled
Jan 28 01:29:52.473961 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:29:52.473973 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:29:52.473985 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:29:52.473996 kernel: kvm-guest: setup PV IPIs
Jan 28 01:29:52.474008 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:29:52.474019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:29:52.474035 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:29:52.474046 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:29:52.474057 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:29:52.474069 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:29:52.474081 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:29:52.474093 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:29:52.474105 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:29:52.474184 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:29:52.474197 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:29:52.474211 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:29:52.474224 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:29:52.474234 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:29:52.474245 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:29:52.474256 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:29:52.474272 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:29:52.474284 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:29:52.474388 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:29:52.474399 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:29:52.474410 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:29:52.474421 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:29:52.474434 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:29:52.477262 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 01:29:52.477277 kernel: landlock: Up and running.
Jan 28 01:29:52.477385 kernel: SELinux: Initializing.
Jan 28 01:29:52.477402 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:29:52.477415 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:29:52.477426 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:29:52.477437 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:29:52.477455 kernel: signal: max sigframe size: 1776
Jan 28 01:29:52.477468 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:29:52.477483 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:29:52.477494 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 01:29:52.477505 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:29:52.477516 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:29:52.477527 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:29:52.477547 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:29:52.477558 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:29:52.477569 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:29:52.477581 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120812K reserved, 0K cma-reserved)
Jan 28 01:29:52.477594 kernel: devtmpfs: initialized
Jan 28 01:29:52.477606 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:29:52.477617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:29:52.477634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:29:52.477647 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 28 01:29:52.477658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:29:52.477671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 28 01:29:52.477684 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:29:52.477694 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:29:52.477707 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:29:52.477723 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:29:52.477734 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:29:52.477747 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:29:52.477760 kernel: audit: type=2000 audit(1769563757.861:1): state=initialized audit_enabled=0 res=1
Jan 28 01:29:52.477771 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:29:52.477783 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:29:52.477796 kernel: cpuidle: using governor menu
Jan 28 01:29:52.477810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:29:52.477824 kernel: dca service started, version 1.12.1
Jan 28 01:29:52.477836 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 28 01:29:52.477847 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:29:52.477861 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:29:52.477873 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:29:52.477883 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:29:52.477900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:29:52.477912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:29:52.477923 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:29:52.477936 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:29:52.477948 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:29:52.477959 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:29:52.477972 kernel: ACPI: Interpreter enabled
Jan 28 01:29:52.477987 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:29:52.477998 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:29:52.478012 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:29:52.478024 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:29:52.478035 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:29:52.478047 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:29:52.481636 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:29:52.481928 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:29:52.484656 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:29:52.484680 kernel: PCI host bridge to bus 0000:00
Jan 28 01:29:52.484942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:29:52.485255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:29:52.485682 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:29:52.485915 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 28 01:29:52.490229 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 28 01:29:52.492034 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:29:52.492516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:29:52.492805 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 01:29:52.493084 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 01:29:52.551977 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:29:52.552509 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 28 01:29:52.552775 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 28 01:29:52.553043 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:29:52.572274 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 15625 usecs
Jan 28 01:29:52.572737 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 01:29:52.573002 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 28 01:29:52.578424 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 28 01:29:52.578698 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 28 01:29:52.578976 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 01:29:52.584938 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 28 01:29:52.585274 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 28 01:29:52.585644 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 28 01:29:52.585911 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 01:29:52.588392 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 28 01:29:52.588638 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 28 01:29:52.588881 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 28 01:29:52.589192 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:29:52.589584 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 01:29:52.589834 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:29:52.590089 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 01:29:52.593661 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 28 01:29:52.593914 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 28 01:29:52.594245 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 01:29:52.594613 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 28 01:29:52.594635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:29:52.594650 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:29:52.594675 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:29:52.594686 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:29:52.594697 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:29:52.594708 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:29:52.594719 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:29:52.594732 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:29:52.594746 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:29:52.594763 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:29:52.594775 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:29:52.594787 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:29:52.594799 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:29:52.594812 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:29:52.594824 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:29:52.594836 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:29:52.594851 kernel: iommu: Default domain type: Translated
Jan 28 01:29:52.594864 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:29:52.594876 kernel: efivars: Registered efivars operations
Jan 28 01:29:52.594888 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:29:52.594900 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:29:52.594913 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:29:52.594924 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 28 01:29:52.594936 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 28 01:29:52.595016 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 28 01:29:52.595030 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 28 01:29:52.595043 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 28 01:29:52.595054 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 28 01:29:52.595064 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 28 01:29:52.595555 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:29:52.596069 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:29:52.596472 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:29:52.596495 kernel: vgaarb: loaded
Jan 28 01:29:52.596508 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:29:52.596520 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:29:52.596531 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:29:52.596541 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:29:52.596561 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:29:52.596574 kernel: pnp: PnP ACPI init
Jan 28 01:29:52.596824 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 28 01:29:52.596843 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:29:52.596857 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:29:52.596870 kernel: NET: Registered PF_INET protocol family
Jan 28 01:29:52.596885 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:29:52.596901 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:29:52.596914 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:29:52.597104 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:29:52.597229 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:29:52.597242 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:29:52.597255 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:29:52.597267 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:29:52.597400 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:29:52.597412 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:29:52.597658 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 28 01:29:52.597903 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 28 01:29:52.598218 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:29:52.598570 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:29:52.598856 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:29:52.599077 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 28 01:29:52.599669 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 28 01:29:52.599900 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:29:52.599918 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:29:52.599933 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:29:52.599945 kernel: Initialise system trusted keyrings
Jan 28 01:29:52.600034 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:29:52.600046 kernel: Key type asymmetric registered
Jan 28 01:29:52.600058 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:29:52.600071 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:29:52.600085 kernel: io scheduler mq-deadline registered
Jan 28 01:29:52.600097 kernel: io scheduler kyber registered
Jan 28 01:29:52.600169 kernel: io scheduler bfq registered
Jan 28 01:29:52.600242 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:29:52.600257 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:29:52.600270 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:29:52.600409 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:29:52.600422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ
sharing enabled Jan 28 01:29:52.600496 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:29:52.600509 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 01:29:52.600522 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 01:29:52.600536 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 01:29:52.600801 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 28 01:29:52.600826 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 01:29:52.600908 kernel: hrtimer: interrupt took 4685708 ns Jan 28 01:29:52.601209 kernel: rtc_cmos 00:04: registered as rtc0 Jan 28 01:29:52.601564 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:29:40 UTC (1769563780) Jan 28 01:29:52.601807 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 28 01:29:52.601827 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 28 01:29:52.601839 kernel: efifb: probing for efifb Jan 28 01:29:52.601854 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 28 01:29:52.601931 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 28 01:29:52.601943 kernel: efifb: scrolling: redraw Jan 28 01:29:52.601955 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 01:29:52.601968 kernel: Console: switching to colour frame buffer device 160x50 Jan 28 01:29:52.601980 kernel: fb0: EFI VGA frame buffer device Jan 28 01:29:52.601994 kernel: pstore: Using crash dump compression: deflate Jan 28 01:29:52.602010 kernel: pstore: Registered efi_pstore as persistent store backend Jan 28 01:29:52.602085 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:29:52.602101 kernel: Segment Routing with IPv6 Jan 28 01:29:52.602189 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:29:52.602202 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:29:52.602215 kernel: Key type dns_resolver registered Jan 28 
01:29:52.602229 kernel: IPI shorthand broadcast: enabled Jan 28 01:29:52.602240 kernel: sched_clock: Marking stable (16585041232, 4760952070)->(25287045314, -3941052012) Jan 28 01:29:52.602402 kernel: registered taskstats version 1 Jan 28 01:29:52.602416 kernel: Loading compiled-in X.509 certificates Jan 28 01:29:52.602428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: e20b9b58c2206ebaa16c4a71db244a0e01a2e623' Jan 28 01:29:52.602441 kernel: Demotion targets for Node 0: null Jan 28 01:29:52.602456 kernel: Key type .fscrypt registered Jan 28 01:29:52.602467 kernel: Key type fscrypt-provisioning registered Jan 28 01:29:52.602478 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 01:29:52.602554 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:29:52.602567 kernel: ima: No architecture policies found Jan 28 01:29:52.602579 kernel: clk: Disabling unused clocks Jan 28 01:29:52.602591 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 01:29:52.602604 kernel: Write protecting the kernel read-only data: 47104k Jan 28 01:29:52.602616 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 28 01:29:52.602628 kernel: Run /init as init process Jan 28 01:29:52.602705 kernel: with arguments: Jan 28 01:29:52.602718 kernel: /init Jan 28 01:29:52.602729 kernel: with environment: Jan 28 01:29:52.602741 kernel: HOME=/ Jan 28 01:29:52.602753 kernel: TERM=linux Jan 28 01:29:52.602765 kernel: SCSI subsystem initialized Jan 28 01:29:52.602778 kernel: libata version 3.00 loaded. 
Jan 28 01:29:52.603040 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:29:52.604385 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:29:52.604638 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 28 01:29:52.604892 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 28 01:29:52.605209 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:29:52.605666 kernel: scsi host0: ahci
Jan 28 01:29:52.605942 kernel: scsi host1: ahci
Jan 28 01:29:52.606481 kernel: scsi host2: ahci
Jan 28 01:29:52.606750 kernel: scsi host3: ahci
Jan 28 01:29:52.607021 kernel: scsi host4: ahci
Jan 28 01:29:52.607474 kernel: scsi host5: ahci
Jan 28 01:29:52.607497 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Jan 28 01:29:52.607579 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Jan 28 01:29:52.607596 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Jan 28 01:29:52.607607 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Jan 28 01:29:52.607618 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Jan 28 01:29:52.607630 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Jan 28 01:29:52.607642 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:29:52.607657 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:29:52.607732 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:29:52.607744 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:29:52.607755 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:29:52.607766 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:29:52.607779 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:29:52.607793 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:29:52.607807 kernel: ata3.00: applying bridge limits
Jan 28 01:29:52.607881 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:29:52.607897 kernel: ata3.00: configured for UDMA/100
Jan 28 01:29:52.608433 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:29:52.608777 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:29:52.609032 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 28 01:29:52.609459 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:29:52.609549 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:29:52.609563 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:29:52.609576 kernel: GPT:16515071 != 27000831
Jan 28 01:29:52.609588 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:29:52.609600 kernel: GPT:16515071 != 27000831
Jan 28 01:29:52.609611 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:29:52.609623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:29:52.609966 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:29:52.609995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:29:52.610007 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:29:52.610018 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 28 01:29:52.610030 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 28 01:29:52.610044 kernel: raid6: avx2x4 gen() 10916 MB/s
Jan 28 01:29:52.610058 kernel: raid6: avx2x2 gen() 19542 MB/s
Jan 28 01:29:52.610191 kernel: raid6: avx2x1 gen() 12197 MB/s
Jan 28 01:29:52.610204 kernel: raid6: using algorithm avx2x2 gen() 19542 MB/s
Jan 28 01:29:52.610216 kernel: raid6: .... xor() 14389 MB/s, rmw enabled
Jan 28 01:29:52.610227 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:29:52.610240 kernel: xor: automatically using best checksumming function avx
Jan 28 01:29:52.610252 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:29:52.610264 kernel: BTRFS: device fsid 34b0c34a-a205-4a5e-b928-fc41d62e7a91 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 28 01:29:52.610414 kernel: BTRFS info (device dm-0): first mount of filesystem 34b0c34a-a205-4a5e-b928-fc41d62e7a91
Jan 28 01:29:52.610427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:29:52.610439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:29:52.610451 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 28 01:29:52.610462 kernel: loop: module loaded
Jan 28 01:29:52.610474 kernel: loop0: detected capacity change from 0 to 100552
Jan 28 01:29:52.610488 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 01:29:52.610554 systemd[1]: Successfully made /usr/ read-only.
Jan 28 01:29:52.610573 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:29:52.610586 systemd[1]: Detected virtualization kvm.
Jan 28 01:29:52.610597 systemd[1]: Detected architecture x86-64.
Jan 28 01:29:52.610608 systemd[1]: Running in initrd.
Jan 28 01:29:52.610621 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:29:52.610688 systemd[1]: Hostname set to .
Jan 28 01:29:52.610702 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 01:29:52.610717 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:29:52.610728 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:29:52.610740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:29:52.610752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:29:52.610831 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:29:52.610847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:29:52.610861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:29:52.610876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:29:52.610891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:29:52.610903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:29:52.610991 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:29:52.611004 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:29:52.611016 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:29:52.611029 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:29:52.611044 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:29:52.611057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:29:52.611068 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:29:52.611196 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:29:52.611211 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:29:52.611225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 28 01:29:52.611242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:29:52.611254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:29:52.611266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:29:52.611403 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:29:52.611417 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:29:52.611432 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:29:52.611445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:29:52.611456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:29:52.611469 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 28 01:29:52.611482 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:29:52.611561 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:29:52.611633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:29:52.611648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:29:52.611661 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:29:52.611737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:29:52.611750 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:29:52.611764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:29:52.611820 systemd-journald[322]: Collecting audit messages is enabled.
Jan 28 01:29:52.611913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:29:52.611929 kernel: audit: type=1130 audit(1769563792.529:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.611944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:29:52.611959 kernel: audit: type=1130 audit(1769563792.594:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.611975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:29:52.612045 systemd-journald[322]: Journal started
Jan 28 01:29:52.612073 systemd-journald[322]: Runtime Journal (/run/log/journal/d6d3ccaa41cf4edb9386aff108df043d) is 6M, max 48M, 42M free.
Jan 28 01:29:52.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.785170 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:29:52.844402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:29:52.844479 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:29:52.868402 kernel: Bridge firewalling registered
Jan 28 01:29:52.868471 kernel: audit: type=1130 audit(1769563792.856:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.919881 systemd-modules-load[323]: Inserted module 'br_netfilter'
Jan 28 01:29:52.933513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:29:52.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.967760 kernel: audit: type=1130 audit(1769563792.935:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:52.972469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:29:53.004646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:29:53.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.091731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:29:53.152801 kernel: audit: type=1130 audit(1769563793.088:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.341882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:29:53.455843 kernel: audit: type=1130 audit(1769563793.351:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.370960 systemd-tmpfiles[346]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 28 01:29:53.385578 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:29:53.560105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:29:53.679389 kernel: audit: type=1130 audit(1769563793.589:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.685824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:29:53.847466 kernel: audit: type=1130 audit(1769563793.710:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.847517 kernel: audit: type=1334 audit(1769563793.726:10): prog-id=6 op=LOAD
Jan 28 01:29:53.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:53.726000 audit: BPF prog-id=6 op=LOAD
Jan 28 01:29:53.728058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:29:53.891181 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4
Jan 28 01:29:54.376227 systemd-resolved[362]: Positive Trust Anchors:
Jan 28 01:29:54.393182 systemd-resolved[362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:29:54.393196 systemd-resolved[362]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 28 01:29:54.449077 systemd-resolved[362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:29:54.945917 systemd-resolved[362]: Defaulting to hostname 'linux'.
Jan 28 01:29:54.969430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:29:55.093261 kernel: audit: type=1130 audit(1769563795.004:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:55.008042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:29:55.552792 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:29:55.703515 kernel: iscsi: registered transport (tcp)
Jan 28 01:29:55.833499 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:29:55.833569 kernel: QLogic iSCSI HBA Driver
Jan 28 01:29:56.123775 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:29:56.309642 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:29:56.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:56.353533 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:29:56.851006 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:29:56.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:56.883600 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:29:57.003451 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:29:57.313989 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:29:57.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:57.349000 audit: BPF prog-id=7 op=LOAD
Jan 28 01:29:57.349000 audit: BPF prog-id=8 op=LOAD
Jan 28 01:29:57.355620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:29:57.615021 systemd-udevd[584]: Using default interface naming scheme 'v257'.
Jan 28 01:29:57.783638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:29:57.910495 kernel: kauditd_printk_skb: 5 callbacks suppressed
Jan 28 01:29:57.910586 kernel: audit: type=1130 audit(1769563797.795:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:57.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:57.802559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:29:58.061707 dracut-pre-trigger[635]: rd.md=0: removing MD RAID activation
Jan 28 01:29:58.268244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:29:58.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.357235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:29:58.470990 kernel: audit: type=1130 audit(1769563798.334:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.471024 kernel: audit: type=1334 audit(1769563798.348:19): prog-id=9 op=LOAD
Jan 28 01:29:58.471042 kernel: audit: type=1130 audit(1769563798.423:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.348000 audit: BPF prog-id=9 op=LOAD
Jan 28 01:29:58.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.400714 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:29:58.435617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:29:58.903394 systemd-networkd[732]: lo: Link UP
Jan 28 01:29:58.903409 systemd-networkd[732]: lo: Gained carrier
Jan 28 01:29:59.037919 kernel: audit: type=1130 audit(1769563798.969:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:58.906711 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:29:58.971082 systemd[1]: Reached target network.target - Network.
Jan 28 01:29:59.395032 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:29:59.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:59.569105 kernel: audit: type=1130 audit(1769563799.471:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:29:59.584704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:30:00.061926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:30:00.133066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:30:00.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:00.305560 kernel: audit: type=1130 audit(1769563800.203:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:00.492837 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 01:30:00.663840 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:30:00.753400 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:30:00.767277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:30:00.886129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:30:00.910946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:30:00.983034 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:30:01.022012 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:30:01.128677 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:30:01.157581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:30:01.312515 kernel: audit: type=1131 audit(1769563801.236:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.158959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:30:01.238117 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:30:01.338956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:30:01.480055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:30:01.531934 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:30:01.561093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:30:01.784918 kernel: audit: type=1130 audit(1769563801.529:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.785054 kernel: audit: type=1130 audit(1769563801.638:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:01.785514 disk-uuid[787]: Primary Header is updated.
Jan 28 01:30:01.785514 disk-uuid[787]: Secondary Entries is updated.
Jan 28 01:30:01.785514 disk-uuid[787]: Secondary Header is updated.
Jan 28 01:30:01.707670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:30:02.351448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:30:02.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:02.557986 systemd-networkd[732]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:30:02.636784 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 28 01:30:02.558003 systemd-networkd[732]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:30:02.611960 systemd-networkd[732]: eth0: Link UP Jan 28 01:30:02.635902 systemd-networkd[732]: eth0: Gained carrier Jan 28 01:30:02.635926 systemd-networkd[732]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:30:02.724099 kernel: AES CTR mode by8 optimization enabled Jan 28 01:30:02.749222 systemd-networkd[732]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:30:03.347782 disk-uuid[788]: Warning: The kernel is still using the old partition table. Jan 28 01:30:03.347782 disk-uuid[788]: The new table will be used at the next reboot or after you Jan 28 01:30:03.347782 disk-uuid[788]: run partprobe(8) or kpartx(8) Jan 28 01:30:03.347782 disk-uuid[788]: The operation has completed successfully. Jan 28 01:30:03.514946 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:30:03.526256 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:30:03.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:03.626262 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:30:03.626432 kernel: audit: type=1130 audit(1769563803.603:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:03.622943 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:30:03.739061 kernel: audit: type=1131 audit(1769563803.604:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:03.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:03.930384 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870) Jan 28 01:30:03.965451 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e Jan 28 01:30:03.965540 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:30:04.026946 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:30:04.027109 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:30:04.098537 kernel: BTRFS info (device vda6): last unmount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e Jan 28 01:30:04.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:04.133808 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:30:04.211241 kernel: audit: type=1130 audit(1769563804.160:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:04.171604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 28 01:30:04.434895 systemd-networkd[732]: eth0: Gained IPv6LL Jan 28 01:30:06.595821 ignition[889]: Ignition 2.24.0 Jan 28 01:30:06.595895 ignition[889]: Stage: fetch-offline Jan 28 01:30:06.595964 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:30:06.595985 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:30:06.596467 ignition[889]: parsed url from cmdline: "" Jan 28 01:30:06.596475 ignition[889]: no config URL provided Jan 28 01:30:06.596486 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:30:06.596503 ignition[889]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:30:06.596580 ignition[889]: op(1): [started] loading QEMU firmware config module Jan 28 01:30:06.596670 ignition[889]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 01:30:06.948236 ignition[889]: op(1): [finished] loading QEMU firmware config module Jan 28 01:30:06.948769 ignition[889]: QEMU firmware config was not found. Ignoring... Jan 28 01:30:07.783926 ignition[889]: parsing config with SHA512: 7d1079440a895b26ed3adc370f750b1381ee36fe96fcddd2882cf060dfd893a30139ffd1918757c925b714ca91b9032a99001e737f29b140e95dcf4ae7f935f1 Jan 28 01:30:07.932427 unknown[889]: fetched base config from "system" Jan 28 01:30:07.932581 unknown[889]: fetched user config from "qemu" Jan 28 01:30:07.948151 ignition[889]: fetch-offline: fetch-offline passed Jan 28 01:30:07.948522 ignition[889]: Ignition finished successfully Jan 28 01:30:08.107487 kernel: audit: type=1130 audit(1769563807.985:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:07.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:07.968551 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:30:07.988249 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 01:30:07.992594 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:30:08.458520 ignition[900]: Ignition 2.24.0 Jan 28 01:30:08.476597 ignition[900]: Stage: kargs Jan 28 01:30:08.476847 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:30:08.476864 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:30:08.480088 ignition[900]: kargs: kargs passed Jan 28 01:30:08.480166 ignition[900]: Ignition finished successfully Jan 28 01:30:08.571035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:30:08.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:08.634104 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 01:30:08.738238 kernel: audit: type=1130 audit(1769563808.607:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:08.959145 ignition[906]: Ignition 2.24.0 Jan 28 01:30:08.959161 ignition[906]: Stage: disks Jan 28 01:30:08.959631 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:30:08.959649 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:30:09.092825 ignition[906]: disks: disks passed Jan 28 01:30:09.093054 ignition[906]: Ignition finished successfully Jan 28 01:30:09.103096 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 28 01:30:09.204976 kernel: audit: type=1130 audit(1769563809.147:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:09.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:09.165427 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:30:09.251914 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:30:09.282284 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:30:09.301657 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:30:09.326139 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:30:09.417672 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:30:09.693728 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 28 01:30:09.719730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:30:09.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:09.760728 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:30:09.846122 kernel: audit: type=1130 audit(1769563809.753:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:10.803366 kernel: EXT4-fs (vda9): mounted filesystem 89ee8811-a55f-4471-b9a6-3378249aa495 r/w with ordered data mode. Quota mode: none. 
Jan 28 01:30:10.813520 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:30:10.833729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:30:10.862598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:30:10.876466 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:30:10.888391 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 01:30:10.888452 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:30:10.888491 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:30:10.944049 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:30:10.965587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 01:30:11.081860 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924) Jan 28 01:30:11.175970 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e Jan 28 01:30:11.176048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:30:11.218862 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:30:11.218954 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:30:11.246159 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:30:12.482167 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:30:12.536597 kernel: audit: type=1130 audit(1769563812.504:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:12.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:12.515919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:30:12.587751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:30:12.684281 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:30:12.729248 kernel: BTRFS info (device vda6): last unmount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e Jan 28 01:30:12.987670 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:30:13.062061 kernel: audit: type=1130 audit(1769563813.006:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:13.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:13.199132 ignition[1022]: INFO : Ignition 2.24.0 Jan 28 01:30:13.199132 ignition[1022]: INFO : Stage: mount Jan 28 01:30:13.235505 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:30:13.235505 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:30:13.266933 ignition[1022]: INFO : mount: mount passed Jan 28 01:30:13.266933 ignition[1022]: INFO : Ignition finished successfully Jan 28 01:30:13.310782 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:30:13.391462 kernel: audit: type=1130 audit(1769563813.326:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:13.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:13.335734 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:30:13.461681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:30:13.660812 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1034) Jan 28 01:30:13.678580 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e Jan 28 01:30:13.678650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:30:13.754402 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:30:13.754487 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:30:13.793702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:30:14.077089 ignition[1051]: INFO : Ignition 2.24.0 Jan 28 01:30:14.077089 ignition[1051]: INFO : Stage: files Jan 28 01:30:14.077089 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:30:14.077089 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:30:14.137581 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:30:14.182741 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:30:14.182741 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:30:14.257284 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:30:14.274884 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:30:14.293170 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:30:14.283268 
unknown[1051]: wrote ssh authorized keys file for user: core Jan 28 01:30:14.346920 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:30:14.346920 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 01:30:14.677060 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:30:15.404454 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:30:15.404454 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:30:15.491957 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:30:15.926529 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:30:15.926529 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:30:15.926529 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:30:16.346754 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:30:25.092183 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:30:25.092183 ignition[1051]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:30:25.166001 ignition[1051]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" 
Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 01:30:25.206055 ignition[1051]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 01:30:25.565898 ignition[1051]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:30:25.641706 ignition[1051]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:30:25.641706 ignition[1051]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:30:25.641706 ignition[1051]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:30:25.641706 ignition[1051]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:30:25.821686 ignition[1051]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:30:25.821686 ignition[1051]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:30:25.821686 ignition[1051]: INFO : files: files passed Jan 28 01:30:25.821686 ignition[1051]: INFO : Ignition finished successfully Jan 28 01:30:25.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:25.878617 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 28 01:30:25.939505 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:30:26.024607 kernel: audit: type=1130 audit(1769563825.919:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:25.989125 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:30:26.069873 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:30:26.070163 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:30:26.135715 kernel: audit: type=1130 audit(1769563826.086:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.135757 kernel: audit: type=1131 audit(1769563826.086:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:26.135948 initrd-setup-root-after-ignition[1082]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:30:26.171847 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:30:26.171847 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:30:26.277881 kernel: audit: type=1130 audit(1769563826.188:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.278207 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:30:26.183617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:30:26.189173 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:30:26.192800 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:30:26.972762 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:30:27.103020 kernel: audit: type=1130 audit(1769563826.997:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:27.103166 kernel: audit: type=1131 audit(1769563826.997:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:26.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:26.975963 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:30:27.002567 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:30:27.158153 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:30:27.209814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:30:27.271716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:30:27.845600 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:30:27.907083 kernel: audit: type=1130 audit(1769563827.861:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:27.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:27.895914 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:30:28.155175 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:30:28.163659 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:30:28.177013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 28 01:30:28.406148 kernel: audit: type=1131 audit(1769563828.336:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.204477 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:30:28.218943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:30:28.223131 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:30:28.472730 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:30:28.501616 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:30:28.547193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:30:28.571723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:30:28.853651 kernel: audit: type=1131 audit(1769563828.667:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.603051 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:30:28.656107 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:30:28.665515 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:30:28.665765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 28 01:30:28.665947 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:30:28.666454 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:30:30.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.666594 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:30:31.080075 kernel: audit: type=1131 audit(1769563830.902:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:28.666718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:30:28.667159 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:30:28.756528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:30:28.909429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:30:29.051939 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:30:29.109837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:30:29.910058 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:30:30.647992 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:30:31.855955 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:30:32.056499 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:30:32.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:32.200017 systemd[1]: Stopped target paths.target - Path Units. 
Jan 28 01:30:32.316538 kernel: audit: type=1131 audit(1769563832.197:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:32.287215 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:30:32.306639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:30:32.490622 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:30:32.586491 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:30:32.614196 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:30:32.682184 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:30:32.791196 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:30:32.791666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:30:32.854402 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 28 01:30:32.857018 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:30:33.450029 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:30:33.495620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:30:33.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:33.660103 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:30:33.962679 kernel: audit: type=1131 audit(1769563833.608:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:33.962733 kernel: audit: type=1131 audit(1769563833.779:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:33.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:33.660698 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:30:33.836679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:30:33.877681 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:30:34.102004 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:30:34.112148 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:30:34.204666 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:30:34.410528 kernel: audit: type=1131 audit(1769563834.204:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.410580 kernel: audit: type=1131 audit(1769563834.347:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.205066 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:30:34.645852 kernel: audit: type=1131 audit(1769563834.502:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.645954 ignition[1108]: INFO : Ignition 2.24.0
Jan 28 01:30:34.645954 ignition[1108]: INFO : Stage: umount
Jan 28 01:30:34.645954 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:30:34.645954 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:30:34.645954 ignition[1108]: INFO : umount: umount passed
Jan 28 01:30:34.645954 ignition[1108]: INFO : Ignition finished successfully
Jan 28 01:30:34.347836 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:30:34.355020 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:30:35.107725 kernel: audit: type=1131 audit(1769563834.913:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.107776 kernel: audit: type=1131 audit(1769563835.003:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.744779 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:30:35.383407 kernel: audit: type=1130 audit(1769563835.186:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.384485 kernel: audit: type=1131 audit(1769563835.186:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:34.757749 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:30:34.757929 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:30:34.914840 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:30:34.915076 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:30:35.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.065640 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:30:35.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:35.065937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:30:35.580843 systemd[1]: Stopped target network.target - Network.
Jan 28 01:30:35.694004 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:30:35.694530 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:30:35.786835 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:30:35.787091 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:30:35.935069 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:30:35.935207 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:30:36.248596 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:30:36.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.257726 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:30:36.320126 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:30:36.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.328835 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:30:36.458802 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:30:36.526897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:30:36.604520 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:30:36.604886 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:30:36.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.772174 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:30:36.780846 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:30:36.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.892999 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 28 01:30:36.896000 audit: BPF prog-id=6 op=UNLOAD
Jan 28 01:30:36.993000 audit: BPF prog-id=9 op=UNLOAD
Jan 28 01:30:36.946471 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:30:37.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:36.946606 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:30:37.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.008663 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:30:37.321540 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 28 01:30:37.321648 kernel: audit: type=1131 audit(1769563837.227:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.069617 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:30:37.069841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:30:37.130973 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:30:37.131089 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:30:37.631583 kernel: audit: type=1131 audit(1769563837.501:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.178801 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:30:37.178988 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:30:37.227959 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:30:37.458878 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:30:37.459231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:30:37.627974 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:30:37.967186 kernel: audit: type=1131 audit(1769563837.879:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:37.628100 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:30:37.727835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:30:37.727917 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:30:37.761786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:30:37.761973 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:30:38.115711 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:30:38.133083 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:30:38.389945 kernel: audit: type=1131 audit(1769563838.163:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.390109 kernel: audit: type=1131 audit(1769563838.164:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.390131 kernel: audit: type=1131 audit(1769563838.167:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.164683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:30:38.164806 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:30:38.167543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:30:38.167594 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 28 01:30:38.167682 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:30:38.167805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:30:38.167882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:30:38.167986 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 28 01:30:38.168057 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:30:38.168156 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:30:38.719573 kernel: audit: type=1131 audit(1769563838.167:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.720093 kernel: audit: type=1131 audit(1769563838.167:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.720119 kernel: audit: type=1131 audit(1769563838.173:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.726800 kernel: audit: type=1131 audit(1769563838.173:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.168224 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:30:38.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:38.173517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:30:38.173657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:30:38.177028 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:30:38.177205 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:30:38.742263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:30:38.742713 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:30:38.790857 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:30:38.834592 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:30:39.145051 systemd[1]: Switching root.
Jan 28 01:30:39.658543 systemd-journald[322]: Journal stopped
Jan 28 01:30:50.029269 systemd-journald[322]: Received SIGTERM from PID 1 (systemd).
Jan 28 01:30:50.029914 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 01:30:50.029939 kernel: SELinux: policy capability open_perms=1
Jan 28 01:30:50.029957 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 01:30:50.029984 kernel: SELinux: policy capability always_check_network=0
Jan 28 01:30:50.030001 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 01:30:50.030022 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 01:30:50.030041 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 01:30:50.030122 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 01:30:50.030147 kernel: SELinux: policy capability userspace_initial_context=0
Jan 28 01:30:50.030165 systemd[1]: Successfully loaded SELinux policy in 339.534ms.
Jan 28 01:30:50.030256 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 45.744ms.
Jan 28 01:30:50.030281 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:30:50.035208 systemd[1]: Detected virtualization kvm.
Jan 28 01:30:50.035271 systemd[1]: Detected architecture x86-64.
Jan 28 01:30:50.035503 systemd[1]: Detected first boot.
Jan 28 01:30:50.035530 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 01:30:50.035551 zram_generator::config[1153]: No configuration found.
Jan 28 01:30:50.035573 kernel: Guest personality initialized and is inactive
Jan 28 01:30:50.035601 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 28 01:30:50.035621 kernel: Initialized host personality
Jan 28 01:30:50.035701 kernel: NET: Registered PF_VSOCK protocol family
Jan 28 01:30:50.035780 systemd[1]: Populated /etc with preset unit settings.
Jan 28 01:30:50.035799 kernel: kauditd_printk_skb: 9 callbacks suppressed
Jan 28 01:30:50.035816 kernel: audit: type=1334 audit(1769563846.081:89): prog-id=12 op=LOAD
Jan 28 01:30:50.035833 kernel: audit: type=1334 audit(1769563846.082:90): prog-id=3 op=UNLOAD
Jan 28 01:30:50.035849 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 01:30:50.035869 kernel: audit: type=1334 audit(1769563846.082:91): prog-id=13 op=LOAD
Jan 28 01:30:50.035887 kernel: audit: type=1334 audit(1769563846.082:92): prog-id=14 op=LOAD
Jan 28 01:30:50.036016 kernel: audit: type=1334 audit(1769563846.082:93): prog-id=4 op=UNLOAD
Jan 28 01:30:50.037814 kernel: audit: type=1334 audit(1769563846.082:94): prog-id=5 op=UNLOAD
Jan 28 01:30:50.037839 kernel: audit: type=1131 audit(1769563846.089:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.037860 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 01:30:50.037880 kernel: audit: type=1130 audit(1769563846.359:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.037900 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 01:30:50.037920 kernel: audit: type=1131 audit(1769563846.359:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.040477 kernel: audit: type=1334 audit(1769563846.453:98): prog-id=12 op=UNLOAD
Jan 28 01:30:50.040510 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 01:30:50.040535 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 01:30:50.040555 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 01:30:50.040575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 01:30:50.040667 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 01:30:50.040756 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 01:30:50.040780 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 01:30:50.040857 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 01:30:50.040934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:30:50.040956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:30:50.040977 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 01:30:50.041065 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 01:30:50.041088 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 01:30:50.041108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:30:50.041126 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 01:30:50.041145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:30:50.041165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:30:50.041184 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 01:30:50.041264 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 01:30:50.041286 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:30:50.041519 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 01:30:50.041541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:30:50.041561 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:30:50.041579 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 28 01:30:50.041596 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:30:50.041687 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:30:50.041770 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 01:30:50.041792 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 01:30:50.041812 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 28 01:30:50.041831 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:30:50.041848 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 28 01:30:50.041937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:30:50.041959 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 28 01:30:50.042047 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 28 01:30:50.042070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:30:50.042090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:30:50.042108 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 01:30:50.042125 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 01:30:50.042144 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 01:30:50.042223 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 01:30:50.045730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:30:50.045754 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 01:30:50.045772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 01:30:50.045789 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 01:30:50.045811 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 01:30:50.045888 systemd[1]: Reached target machines.target - Containers.
Jan 28 01:30:50.045980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 01:30:50.046065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:30:50.046087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:30:50.046108 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 01:30:50.046127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:30:50.046205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:30:50.046226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:30:50.046501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 01:30:50.046526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:30:50.046545 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 01:30:50.046565 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 01:30:50.046588 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 01:30:50.046680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 01:30:50.046702 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 01:30:50.046774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 01:30:50.046795 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:30:50.046813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:30:50.046831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:30:50.046909 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 01:30:50.046929 kernel: fuse: init (API version 7.41)
Jan 28 01:30:50.046948 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 28 01:30:50.046965 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:30:50.047064 systemd-journald[1239]: Collecting audit messages is enabled.
Jan 28 01:30:50.047100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:30:50.047188 systemd-journald[1239]: Journal started
Jan 28 01:30:50.047224 systemd-journald[1239]: Runtime Journal (/run/log/journal/d6d3ccaa41cf4edb9386aff108df043d) is 6M, max 48M, 42M free.
Jan 28 01:30:48.145000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 28 01:30:49.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:49.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:49.541000 audit: BPF prog-id=14 op=UNLOAD
Jan 28 01:30:49.541000 audit: BPF prog-id=13 op=UNLOAD
Jan 28 01:30:49.569000 audit: BPF prog-id=15 op=LOAD
Jan 28 01:30:49.580000 audit: BPF prog-id=16 op=LOAD
Jan 28 01:30:49.624000 audit: BPF prog-id=17 op=LOAD
Jan 28 01:30:46.054178 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 01:30:50.019000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 28 01:30:50.019000 audit[1239]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd97c6c3d0 a2=4000 a3=0 items=0 ppid=1 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:30:50.019000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 28 01:30:46.083764 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 01:30:46.089082 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 01:30:46.089875 systemd[1]: systemd-journald.service: Consumed 3.871s CPU time.
Jan 28 01:30:50.112128 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:30:50.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.132007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 01:30:50.138925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 01:30:50.147206 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 01:30:50.156701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 01:30:50.166138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 01:30:50.174535 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 01:30:50.182113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 01:30:50.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.203938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:30:50.209451 kernel: ACPI: bus type drm_connector registered
Jan 28 01:30:50.228200 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 01:30:50.228890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 01:30:50.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.250878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:30:50.251285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:30:50.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.271095 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:30:50.278621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:30:50.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.295036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:30:50.302698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:30:50.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.329090 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 01:30:50.329691 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 01:30:50.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.353789 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:30:50.354517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:30:50.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.370804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:30:50.400847 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:30:50.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.433878 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 01:30:50.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.476076 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 28 01:30:50.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.515928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:30:50.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:50.592562 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:30:50.607891 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 28 01:30:50.623792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 01:30:50.653817 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 01:30:50.670568 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 01:30:50.670667 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:30:50.690833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 28 01:30:50.714755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:30:50.715623 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 01:30:50.733169 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 01:30:50.752573 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 01:30:50.769601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:30:50.773117 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 01:30:50.786714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:30:50.799632 systemd-journald[1239]: Time spent on flushing to /var/log/journal/d6d3ccaa41cf4edb9386aff108df043d is 96.003ms for 1238 entries.
Jan 28 01:30:50.799632 systemd-journald[1239]: System Journal (/var/log/journal/d6d3ccaa41cf4edb9386aff108df043d) is 8M, max 163.5M, 155.5M free.
Jan 28 01:30:51.544003 systemd-journald[1239]: Received client request to flush runtime journal.
Jan 28 01:30:50.797681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:30:50.904681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 01:30:51.277838 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:30:51.419539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 01:30:51.473651 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 01:30:51.503553 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 01:30:51.582769 kernel: kauditd_printk_skb: 31 callbacks suppressed
Jan 28 01:30:51.582966 kernel: audit: type=1130 audit(1769563851.548:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:51.583019 kernel: loop1: detected capacity change from 0 to 111560
Jan 28 01:30:51.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:51.578844 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 01:30:51.689760 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 28 01:30:51.751832 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 01:30:51.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:51.867508 kernel: audit: type=1130 audit(1769563851.793:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:51.889754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:30:51.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.016587 kernel: audit: type=1130 audit(1769563851.945:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.060062 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jan 28 01:30:52.060095 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jan 28 01:30:52.109752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 01:30:52.140561 kernel: loop2: detected capacity change from 0 to 224512
Jan 28 01:30:52.140709 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:30:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.180678 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 28 01:30:52.250618 kernel: audit: type=1130 audit(1769563852.179:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.338729 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 01:30:52.391650 kernel: audit: type=1130 audit(1769563852.297:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:52.579704 kernel: loop3: detected capacity change from 0 to 50784
Jan 28 01:30:53.023036 kernel: loop4: detected capacity change from 0 to 111560
Jan 28 01:30:53.035816 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 01:30:53.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:53.107888 kernel: audit: type=1130 audit(1769563853.055:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:53.117163 kernel: audit: type=1334 audit(1769563853.087:134): prog-id=18 op=LOAD
Jan 28 01:30:53.117203 kernel: audit: type=1334 audit(1769563853.087:135): prog-id=19 op=LOAD
Jan 28 01:30:53.087000 audit: BPF prog-id=18 op=LOAD
Jan 28 01:30:53.087000 audit: BPF prog-id=19 op=LOAD
Jan 28 01:30:53.137175 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 28 01:30:53.164063 kernel: audit: type=1334 audit(1769563853.087:136): prog-id=20 op=LOAD
Jan 28 01:30:53.087000 audit: BPF prog-id=20 op=LOAD
Jan 28 01:30:53.207689 kernel: audit: type=1334 audit(1769563853.199:137): prog-id=21 op=LOAD
Jan 28 01:30:53.199000 audit: BPF prog-id=21 op=LOAD
Jan 28 01:30:53.207517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:30:53.239896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:30:53.263000 audit: BPF prog-id=22 op=LOAD
Jan 28 01:30:53.263000 audit: BPF prog-id=23 op=LOAD
Jan 28 01:30:53.263000 audit: BPF prog-id=24 op=LOAD
Jan 28 01:30:53.265938 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 28 01:30:53.276000 audit: BPF prog-id=25 op=LOAD
Jan 28 01:30:53.276000 audit: BPF prog-id=26 op=LOAD
Jan 28 01:30:53.277000 audit: BPF prog-id=27 op=LOAD
Jan 28 01:30:53.280681 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 28 01:30:53.315154 kernel: loop5: detected capacity change from 0 to 224512
Jan 28 01:30:53.401241 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 28 01:30:53.402066 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 28 01:30:53.427987 kernel: loop6: detected capacity change from 0 to 50784
Jan 28 01:30:53.445142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:30:53.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:53.573096 (sd-merge)[1296]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 28 01:30:53.603707 (sd-merge)[1296]: Merged extensions into '/usr'.
Jan 28 01:30:53.627598 systemd-nsresourced[1301]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 28 01:30:53.638522 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 28 01:30:53.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:53.705662 systemd[1]: Reload requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 01:30:53.705755 systemd[1]: Reloading...
Jan 28 01:30:54.664066 systemd-oomd[1298]: No swap; memory pressure usage will be degraded
Jan 28 01:30:54.911818 zram_generator::config[1358]: No configuration found.
Jan 28 01:30:55.449956 systemd-resolved[1299]: Positive Trust Anchors:
Jan 28 01:30:55.450024 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:30:55.450032 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 28 01:30:55.450082 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:30:55.490693 systemd-resolved[1299]: Defaulting to hostname 'linux'.
Jan 28 01:30:56.571792 systemd[1]: Reloading finished in 2862 ms.
Jan 28 01:30:56.678990 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 28 01:30:56.703586 kernel: kauditd_printk_skb: 8 callbacks suppressed
Jan 28 01:30:56.703744 kernel: audit: type=1130 audit(1769563856.690:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:56.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:56.692588 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 28 01:30:56.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:56.834916 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:30:57.003555 kernel: audit: type=1130 audit(1769563856.829:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.024733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 28 01:30:57.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.088477 kernel: audit: type=1130 audit(1769563857.022:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.152826 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 28 01:30:57.182683 kernel: audit: type=1130 audit(1769563857.135:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.214082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:30:57.247772 kernel: audit: type=1130 audit(1769563857.191:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:57.266758 systemd[1]: Starting ensure-sysext.service...
Jan 28 01:30:57.292946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:30:57.313000 audit: BPF prog-id=8 op=UNLOAD
Jan 28 01:30:57.313000 audit: BPF prog-id=7 op=UNLOAD
Jan 28 01:30:57.338911 kernel: audit: type=1334 audit(1769563857.313:151): prog-id=8 op=UNLOAD
Jan 28 01:30:57.338981 kernel: audit: type=1334 audit(1769563857.313:152): prog-id=7 op=UNLOAD
Jan 28 01:30:57.354000 audit: BPF prog-id=28 op=LOAD
Jan 28 01:30:57.354000 audit: BPF prog-id=29 op=LOAD
Jan 28 01:30:57.372946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:30:57.386436 kernel: audit: type=1334 audit(1769563857.354:153): prog-id=28 op=LOAD
Jan 28 01:30:57.386526 kernel: audit: type=1334 audit(1769563857.354:154): prog-id=29 op=LOAD
Jan 28 01:30:57.423000 audit: BPF prog-id=30 op=LOAD
Jan 28 01:30:57.441965 kernel: audit: type=1334 audit(1769563857.423:155): prog-id=30 op=LOAD
Jan 28 01:30:57.426000 audit: BPF prog-id=21 op=UNLOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=31 op=LOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=25 op=UNLOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=32 op=LOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=33 op=LOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=26 op=UNLOAD
Jan 28 01:30:57.427000 audit: BPF prog-id=27 op=UNLOAD
Jan 28 01:30:57.449000 audit: BPF prog-id=34 op=LOAD
Jan 28 01:30:57.463000 audit: BPF prog-id=15 op=UNLOAD
Jan 28 01:30:57.463000 audit: BPF prog-id=35 op=LOAD
Jan 28 01:30:57.463000 audit: BPF prog-id=36 op=LOAD
Jan 28 01:30:57.463000 audit: BPF prog-id=16 op=UNLOAD
Jan 28 01:30:57.463000 audit: BPF prog-id=17 op=UNLOAD
Jan 28 01:30:57.470000 audit: BPF prog-id=37 op=LOAD
Jan 28 01:30:57.471000 audit: BPF prog-id=22 op=UNLOAD
Jan 28 01:30:57.471000 audit: BPF prog-id=38 op=LOAD
Jan 28 01:30:57.471000 audit: BPF prog-id=39 op=LOAD
Jan 28 01:30:57.471000 audit: BPF prog-id=23 op=UNLOAD
Jan 28 01:30:57.471000 audit: BPF prog-id=24 op=UNLOAD
Jan 28 01:30:57.473000 audit: BPF prog-id=40 op=LOAD
Jan 28 01:30:57.473000 audit: BPF prog-id=18 op=UNLOAD
Jan 28 01:30:57.473000 audit: BPF prog-id=41 op=LOAD
Jan 28 01:30:57.473000 audit: BPF prog-id=42 op=LOAD
Jan 28 01:30:57.473000 audit: BPF prog-id=19 op=UNLOAD
Jan 28 01:30:57.474134 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 28 01:30:57.474183 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 28 01:30:57.476071 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 01:30:57.483000 audit: BPF prog-id=20 op=UNLOAD
Jan 28 01:30:57.490864 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Jan 28 01:30:57.491004 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Jan 28 01:30:57.536029 systemd[1]: Reload requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Jan 28 01:30:57.536109 systemd[1]: Reloading...
Jan 28 01:30:57.562070 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:30:57.562097 systemd-tmpfiles[1384]: Skipping /boot
Jan 28 01:30:57.659492 systemd-udevd[1385]: Using default interface naming scheme 'v257'.
Jan 28 01:30:57.675490 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:30:57.675638 systemd-tmpfiles[1384]: Skipping /boot
Jan 28 01:30:58.005565 zram_generator::config[1417]: No configuration found.
Jan 28 01:30:58.749658 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 28 01:30:58.768454 kernel: mousedev: PS/2 mouse device common for all mice
Jan 28 01:30:58.782491 kernel: ACPI: button: Power Button [PWRF]
Jan 28 01:30:59.033563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:30:59.087736 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 28 01:30:59.088201 systemd[1]: Reloading finished in 1551 ms.
Jan 28 01:30:59.126614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:30:59.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:59.171000 audit: BPF prog-id=43 op=LOAD
Jan 28 01:30:59.171000 audit: BPF prog-id=30 op=UNLOAD
Jan 28 01:30:59.172000 audit: BPF prog-id=44 op=LOAD
Jan 28 01:30:59.174000 audit: BPF prog-id=45 op=LOAD
Jan 28 01:30:59.175000 audit: BPF prog-id=28 op=UNLOAD
Jan 28 01:30:59.175000 audit: BPF prog-id=29 op=UNLOAD
Jan 28 01:30:59.178000 audit: BPF prog-id=46 op=LOAD
Jan 28 01:30:59.183000 audit: BPF prog-id=34 op=UNLOAD
Jan 28 01:30:59.183000 audit: BPF prog-id=47 op=LOAD
Jan 28 01:30:59.183000 audit: BPF prog-id=48 op=LOAD
Jan 28 01:30:59.183000 audit: BPF prog-id=35 op=UNLOAD
Jan 28 01:30:59.183000 audit: BPF prog-id=36 op=UNLOAD
Jan 28 01:30:59.185000 audit: BPF prog-id=49 op=LOAD
Jan 28 01:30:59.188000 audit: BPF prog-id=37 op=UNLOAD
Jan 28 01:30:59.188000 audit: BPF prog-id=50 op=LOAD
Jan 28 01:30:59.188000 audit: BPF prog-id=51 op=LOAD
Jan 28 01:30:59.188000 audit: BPF prog-id=38 op=UNLOAD
Jan 28 01:30:59.190000 audit: BPF prog-id=39 op=UNLOAD
Jan 28 01:30:59.193000 audit: BPF prog-id=52 op=LOAD
Jan 28 01:30:59.193000 audit: BPF prog-id=40 op=UNLOAD
Jan 28 01:30:59.193000 audit: BPF prog-id=53 op=LOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=54 op=LOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=41 op=UNLOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=42 op=UNLOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=55 op=LOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=31 op=UNLOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=56 op=LOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=57 op=LOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=32 op=UNLOAD
Jan 28 01:30:59.194000 audit: BPF prog-id=33 op=UNLOAD
Jan 28 01:30:59.216799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:30:59.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:59.315872 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 28 01:30:59.316764 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 28 01:30:59.378440 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 28 01:30:59.480804 systemd[1]: Finished ensure-sysext.service.
Jan 28 01:30:59.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:30:59.589987 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:30:59.592764 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 28 01:30:59.622924 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 28 01:30:59.649020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:30:59.757588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:30:59.904776 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:31:00.041919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:31:00.130776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:31:00.151635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:31:00.157593 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 01:31:00.199007 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 28 01:31:00.234614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 28 01:31:00.288512 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 01:31:00.348897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 28 01:31:00.387000 audit: BPF prog-id=58 op=LOAD
Jan 28 01:31:00.404197 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:31:00.481000 audit: BPF prog-id=59 op=LOAD
Jan 28 01:31:00.488856 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 28 01:31:00.542658 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 28 01:31:00.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:00.565677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:31:00.565888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:31:00.568663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:31:00.569177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:31:00.569894 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:31:00.570221 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:31:00.570836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:31:00.571131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:31:00.571751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:31:00.572046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:31:00.940511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:31:00.940855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:31:01.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:01.042242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:31:01.074078 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:31:01.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:01.372000 audit[1523]: SYSTEM_BOOT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:31:01.414078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:31:01.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:01.713554 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:31:01.729848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:31:01.758729 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 28 01:31:01.758859 kernel: audit: type=1130 audit(1769563861.722:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:01.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:01.768000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:31:01.794453 kernel: audit: type=1305 audit(1769563861.768:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:31:01.794689 augenrules[1545]: No rules Jan 28 01:31:01.768000 audit[1545]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe90466e40 a2=420 a3=0 items=0 ppid=1499 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:02.014460 kernel: audit: type=1300 audit(1769563861.768:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe90466e40 a2=420 a3=0 items=0 ppid=1499 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:02.014632 kernel: audit: type=1327 audit(1769563861.768:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:31:01.768000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:31:01.970729 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:31:02.013750 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:31:02.376447 systemd-networkd[1516]: lo: Link UP Jan 28 01:31:02.390202 systemd-networkd[1516]: lo: Gained carrier Jan 28 01:31:02.417964 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 28 01:31:02.431862 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:31:02.431879 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:31:02.442153 systemd-networkd[1516]: eth0: Link UP Jan 28 01:31:02.447067 systemd-networkd[1516]: eth0: Gained carrier Jan 28 01:31:02.447213 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:31:02.474170 systemd[1]: Reached target network.target - Network. Jan 28 01:31:02.553105 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:31:02.594096 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:31:02.635808 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:31:02.636232 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:31:02.683934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:31:02.753707 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:31:02.756126 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 28 01:31:03.297218 systemd-resolved[1299]: Clock change detected. Flushing caches. Jan 28 01:31:03.297535 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:31:03.297678 systemd-timesyncd[1519]: Initial clock synchronization to Wed 2026-01-28 01:31:03.296121 UTC. Jan 28 01:31:03.518602 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 28 01:31:04.877044 systemd-networkd[1516]: eth0: Gained IPv6LL Jan 28 01:31:04.958619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:31:04.979693 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:31:08.796460 kernel: kvm_amd: TSC scaling supported Jan 28 01:31:08.800607 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:31:08.800746 kernel: kvm_amd: Nested Paging enabled Jan 28 01:31:08.805761 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:31:08.815595 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:31:09.471663 ldconfig[1511]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:31:09.582154 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:31:09.947124 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:31:10.713215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:31:10.762603 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:31:10.791108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:31:10.869112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:31:10.922118 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:31:11.000073 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:31:11.030047 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:31:11.053110 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 01:31:11.079929 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 28 01:31:11.103928 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:31:11.152899 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:31:11.153052 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:31:11.168713 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:31:11.222204 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:31:11.286020 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:31:11.446570 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:31:11.494236 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:31:11.560718 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:31:11.789207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:31:11.898135 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:31:11.929155 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:31:12.038181 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:31:12.139944 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:31:12.191971 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:31:12.192126 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:31:12.221065 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:31:12.286888 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:31:12.434196 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 28 01:31:12.526976 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:31:12.622243 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:31:12.822085 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:31:12.879085 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:31:12.971026 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:31:13.065905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:31:13.140747 jq[1569]: false Jan 28 01:31:13.172927 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:31:13.298494 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 28 01:31:13.297534 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 28 01:31:13.326774 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 28 01:31:13.326927 oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 28 01:31:13.327677 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:31:13.327677 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 28 01:31:13.326971 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:31:13.327058 oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 28 01:31:13.358054 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 28 01:31:13.364453 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 28 01:31:13.360460 oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 28 01:31:13.360498 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:31:13.373213 extend-filesystems[1570]: Found /dev/vda6 Jan 28 01:31:13.391018 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:31:13.403732 extend-filesystems[1570]: Found /dev/vda9 Jan 28 01:31:13.452760 extend-filesystems[1570]: Checking size of /dev/vda9 Jan 28 01:31:13.452803 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:31:13.479593 extend-filesystems[1570]: Resized partition /dev/vda9 Jan 28 01:31:13.505755 extend-filesystems[1590]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:31:13.543162 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 28 01:31:13.516951 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:31:13.564687 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:31:13.877237 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:31:14.021913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:31:14.030896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:31:14.077992 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:31:14.278961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:31:14.763748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 28 01:31:15.218113 update_engine[1599]: I20260128 01:31:14.982221 1599 main.cc:92] Flatcar Update Engine starting Jan 28 01:31:14.842909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:31:14.921799 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:31:14.924221 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:31:15.282726 jq[1600]: true Jan 28 01:31:14.926524 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:31:14.969184 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:31:14.971135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:31:15.014637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:31:15.025007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:31:15.025619 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:31:15.547014 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 28 01:31:15.639145 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:31:16.136502 jq[1611]: true Jan 28 01:31:16.178212 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:31:16.203859 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:31:16.344769 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:31:16.344769 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:31:16.344769 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 28 01:31:16.625623 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Jan 28 01:31:16.408990 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:31:16.410154 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 28 01:31:16.649079 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:31:16.761665 tar[1609]: linux-amd64/LICENSE Jan 28 01:31:16.761665 tar[1609]: linux-amd64/helm Jan 28 01:31:17.113930 dbus-daemon[1567]: [system] SELinux support is enabled Jan 28 01:31:17.116900 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:31:17.146668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:31:17.146734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:31:17.170662 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:31:17.170721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:31:17.235126 bash[1654]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:31:17.249226 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:31:17.265986 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:31:17.272241 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:31:17.314126 update_engine[1599]: I20260128 01:31:17.280709 1599 update_check_scheduler.cc:74] Next update check in 4m51s Jan 28 01:31:17.315574 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 28 01:31:17.357696 systemd-logind[1594]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:31:17.359064 systemd-logind[1594]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:31:17.383388 systemd-logind[1594]: New seat seat0. Jan 28 01:31:17.394578 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:31:18.469969 sshd_keygen[1601]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:31:19.372766 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:31:19.593925 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:31:19.664934 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:31:19.727192 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:41554.service - OpenSSH per-connection server daemon (10.0.0.1:41554). Jan 28 01:31:19.924778 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:31:20.363001 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:31:20.383561 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:31:20.503851 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:31:21.420951 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:31:21.475136 sshd[1676]: Access denied for user core by PAM account configuration [preauth] Jan 28 01:31:21.511064 systemd[1]: sshd@0-10.0.0.88:22-10.0.0.1:41554.service: Deactivated successfully. Jan 28 01:31:21.812952 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:31:21.875970 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:31:21.886147 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 28 01:31:23.994803 containerd[1612]: time="2026-01-28T01:31:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:31:24.019833 containerd[1612]: time="2026-01-28T01:31:24.015929697Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:31:24.486774 containerd[1612]: time="2026-01-28T01:31:24.486701318Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="111.009µs" Jan 28 01:31:24.488422 containerd[1612]: time="2026-01-28T01:31:24.488249419Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:31:24.488736 containerd[1612]: time="2026-01-28T01:31:24.488710179Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:31:24.488853 containerd[1612]: time="2026-01-28T01:31:24.488834070Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:31:24.489618 containerd[1612]: time="2026-01-28T01:31:24.489593618Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:31:24.489714 containerd[1612]: time="2026-01-28T01:31:24.489695479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:31:24.489977 containerd[1612]: time="2026-01-28T01:31:24.489954592Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:31:24.490047 containerd[1612]: time="2026-01-28T01:31:24.490032367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 
01:31:24.492717 containerd[1612]: time="2026-01-28T01:31:24.492682484Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.492824 containerd[1612]: time="2026-01-28T01:31:24.492800785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:31:24.492923 containerd[1612]: time="2026-01-28T01:31:24.492900852Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:31:24.492997 containerd[1612]: time="2026-01-28T01:31:24.492979209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.493745 containerd[1612]: time="2026-01-28T01:31:24.493721074Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.493835 containerd[1612]: time="2026-01-28T01:31:24.493815600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:31:24.494032 containerd[1612]: time="2026-01-28T01:31:24.494009843Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.494635 containerd[1612]: time="2026-01-28T01:31:24.494611085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.494955 containerd[1612]: time="2026-01-28T01:31:24.494932034Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:31:24.496886 containerd[1612]: time="2026-01-28T01:31:24.495027643Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:31:24.497077 containerd[1612]: time="2026-01-28T01:31:24.496985237Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:31:24.515866 containerd[1612]: time="2026-01-28T01:31:24.515799002Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:31:24.516206 containerd[1612]: time="2026-01-28T01:31:24.516172950Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:31:24.588849 containerd[1612]: time="2026-01-28T01:31:24.588779312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.589683510Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.589838459Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.589868135Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.589964675Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590057397Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590079830Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590094728Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590116428Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590137117Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590153497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590168746Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590185407Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:31:24.590513 containerd[1612]: time="2026-01-28T01:31:24.590202058Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:31:24.591548 containerd[1612]: time="2026-01-28T01:31:24.591139669Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:31:24.591548 containerd[1612]: time="2026-01-28T01:31:24.591415273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.591894318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.591985677Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592005074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592019191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592037625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592054877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592074013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592089271Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592103768Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:31:24.594131 containerd[1612]: time="2026-01-28T01:31:24.592203064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:31:24.598123 containerd[1612]: time="2026-01-28T01:31:24.596567956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:31:24.598123 containerd[1612]: time="2026-01-28T01:31:24.596605847Z" level=info msg="Start snapshots syncer" Jan 28 01:31:24.598123 containerd[1612]: time="2026-01-28T01:31:24.597415428Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 
01:31:24.598643 containerd[1612]: time="2026-01-28T01:31:24.598527494Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:31:24.599170 containerd[1612]: time="2026-01-28T01:31:24.598940736Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:31:24.599170 containerd[1612]: time="2026-01-28T01:31:24.599015806Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:31:24.599250 containerd[1612]: time="2026-01-28T01:31:24.599196463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:31:24.599250 containerd[1612]: time="2026-01-28T01:31:24.599228112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:31:24.599250 containerd[1612]: time="2026-01-28T01:31:24.599243781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:31:24.599627 containerd[1612]: time="2026-01-28T01:31:24.599389243Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:31:24.599627 containerd[1612]: time="2026-01-28T01:31:24.599411885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:31:24.599627 containerd[1612]: time="2026-01-28T01:31:24.599425721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:31:24.599627 containerd[1612]: time="2026-01-28T01:31:24.599439697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:31:24.599758 containerd[1612]: time="2026-01-28T01:31:24.599711665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602637898Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602862317Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602890910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602905047Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602917680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602932488Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602946053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.602960721Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.603381436Z" level=info msg="runtime interface created" Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.603393699Z" level=info msg="created NRI interface" Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.603422352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.603441067Z" level=info msg="Connect containerd service" Jan 28 01:31:24.605960 containerd[1612]: time="2026-01-28T01:31:24.603542437Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:31:24.661884 
containerd[1612]: time="2026-01-28T01:31:24.660794380Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:31:25.782605 tar[1609]: linux-amd64/README.md Jan 28 01:31:26.066144 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:31:26.872968 containerd[1612]: time="2026-01-28T01:31:26.862956141Z" level=info msg="Start subscribing containerd event" Jan 28 01:31:26.882076 containerd[1612]: time="2026-01-28T01:31:26.878792136Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:31:26.882076 containerd[1612]: time="2026-01-28T01:31:26.878962704Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.872249541Z" level=info msg="Start recovering state" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883185467Z" level=info msg="Start event monitor" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883392133Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883410348Z" level=info msg="Start streaming server" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883548675Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883565377Z" level=info msg="runtime interface starting up..." Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883635809Z" level=info msg="starting plugins..." 
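Editor's note, not part of the captured log: the `failed to load cni during init` error above is the expected first-boot state; the `confDir` reported in the cri config earlier in this log (`/etc/cni/net.d`) is still empty until a CNI add-on installs a network definition. As a hedged sketch of what containerd's conf syncer scans for (the network name, bridge name, and subnet here are illustrative assumptions, not values from this system), a conventional bridge conflist looks like:

```shell
# Sketch only: print a typical CNI conflist of the kind that would normally be
# installed into /etc/cni/net.d by the cluster's CNI add-on. All values below
# (name "mynet", bridge "cni0", subnet 10.22.0.0/16) are assumptions.
conf='{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}'
printf '%s\n' "$conf"
# Installing it would normally require root, e.g.:
#   printf '%s\n' "$conf" | sudo tee /etc/cni/net.d/10-mynet.conflist
```

Until such a file exists, the cri plugin keeps running and retries via the "cni network conf syncer" that is started later in this log.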
Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.883787502Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:31:26.885453 containerd[1612]: time="2026-01-28T01:31:26.885055268Z" level=info msg="containerd successfully booted in 2.904900s" Jan 28 01:31:26.898599 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:31:31.639630 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:45082.service - OpenSSH per-connection server daemon (10.0.0.1:45082). Jan 28 01:31:31.746643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:31:31.750076 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:31:31.753635 systemd[1]: Startup finished in 24.441s (kernel) + 51.983s (initrd) + 50.637s (userspace) = 2min 7.061s. Jan 28 01:31:31.777458 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:31:31.800101 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 45082 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:31.828469 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:31.878666 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:31:31.887134 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:31:31.906337 systemd-logind[1594]: New session 1 of user core. Jan 28 01:31:32.112830 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:31:32.141956 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:31:32.384327 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:32.424964 systemd-logind[1594]: New session 2 of user core. 
Jan 28 01:31:33.262907 systemd[1729]: Queued start job for default target default.target. Jan 28 01:31:33.272432 systemd[1729]: Created slice app.slice - User Application Slice. Jan 28 01:31:33.272554 systemd[1729]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:31:33.272578 systemd[1729]: Reached target paths.target - Paths. Jan 28 01:31:33.273424 systemd[1729]: Reached target timers.target - Timers. Jan 28 01:31:33.278850 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:31:33.283652 systemd[1729]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:31:33.377714 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:31:33.378120 systemd[1729]: Reached target sockets.target - Sockets. Jan 28 01:31:33.384116 systemd[1729]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 01:31:33.385076 systemd[1729]: Reached target basic.target - Basic System. Jan 28 01:31:33.385893 systemd[1729]: Reached target default.target - Main User Target. Jan 28 01:31:33.387001 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:31:33.388976 systemd[1729]: Startup finished in 909ms. Jan 28 01:31:33.407722 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:31:33.508047 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:41814.service - OpenSSH per-connection server daemon (10.0.0.1:41814). Jan 28 01:31:33.923744 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 41814 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:33.935636 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:34.000174 systemd-logind[1594]: New session 3 of user core. Jan 28 01:31:34.017869 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 28 01:31:34.184758 sshd[1752]: Connection closed by 10.0.0.1 port 41814 Jan 28 01:31:34.189554 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:34.228624 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:41814.service: Deactivated successfully. Jan 28 01:31:34.236347 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:31:34.239832 systemd-logind[1594]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:31:34.245892 systemd-logind[1594]: Removed session 3. Jan 28 01:31:34.251771 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:41828.service - OpenSSH per-connection server daemon (10.0.0.1:41828). Jan 28 01:31:34.518893 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 41828 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:34.522929 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:34.557388 systemd-logind[1594]: New session 4 of user core. Jan 28 01:31:34.573438 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:31:34.637950 sshd[1764]: Connection closed by 10.0.0.1 port 41828 Jan 28 01:31:34.637177 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:34.653707 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:41828.service: Deactivated successfully. Jan 28 01:31:34.659411 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:31:34.667938 systemd-logind[1594]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:31:34.671759 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:41844.service - OpenSSH per-connection server daemon (10.0.0.1:41844). Jan 28 01:31:34.672771 systemd-logind[1594]: Removed session 4. 
Jan 28 01:31:34.908373 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 41844 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:34.914759 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:35.100168 systemd-logind[1594]: New session 5 of user core. Jan 28 01:31:35.113412 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:31:35.287833 sshd[1775]: Connection closed by 10.0.0.1 port 41844 Jan 28 01:31:35.277615 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:35.670087 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:41844.service: Deactivated successfully. Jan 28 01:31:35.686180 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:31:35.695331 systemd-logind[1594]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:31:35.708891 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:41852.service - OpenSSH per-connection server daemon (10.0.0.1:41852). Jan 28 01:31:35.717597 systemd-logind[1594]: Removed session 5. Jan 28 01:31:36.087950 kubelet[1724]: E0128 01:31:36.086242 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:31:36.098196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:31:36.098783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:31:36.102425 systemd[1]: kubelet.service: Consumed 4.962s CPU time, 268.9M memory peak. 
Jan 28 01:31:36.119005 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 41852 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:36.124000 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:36.192998 systemd-logind[1594]: New session 6 of user core. Jan 28 01:31:36.207910 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:31:36.454943 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:31:36.460609 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:31:36.524905 sudo[1787]: pam_unix(sudo:session): session closed for user root Jan 28 01:31:36.558748 sshd[1786]: Connection closed by 10.0.0.1 port 41852 Jan 28 01:31:36.562883 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:36.642673 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:41852.service: Deactivated successfully. Jan 28 01:31:36.648765 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:31:36.673941 systemd-logind[1594]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:31:36.682047 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:41866.service - OpenSSH per-connection server daemon (10.0.0.1:41866). Jan 28 01:31:36.691653 systemd-logind[1594]: Removed session 6. Jan 28 01:31:37.055701 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 41866 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:37.069656 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:37.122725 systemd-logind[1594]: New session 7 of user core. Jan 28 01:31:37.149988 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 01:31:37.304823 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:31:37.308793 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:31:37.366814 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 28 01:31:37.420202 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:31:37.423857 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:31:37.495075 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:31:37.807000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:31:37.814177 augenrules[1824]: No rules Jan 28 01:31:37.807000 audit[1824]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffedd861f90 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:37.845497 kernel: audit: type=1305 audit(1769563897.807:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:31:37.845575 kernel: audit: type=1300 audit(1769563897.807:230): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffedd861f90 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:37.842665 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:31:37.843196 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
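Editor's note, not part of the captured log: the `PROCTITLE` records above carry the process's argv as hex-encoded, NUL-separated bytes (`ausearch -i` would decode them the same way). Decoding the `auditctl` proctitle copied verbatim from the entry above:

```shell
# Decode an audit PROCTITLE field: pairs of hex digits, with NUL bytes
# separating argv elements. Hex string copied verbatim from the log entry.
hex='2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573'
decoded="$(printf "$(printf '%s' "$hex" | sed 's/../\\x&/g')" | tr '\0' ' ')"
echo "$decoded"   # -> /sbin/auditctl -R /etc/audit/audit.rules
```

This matches the surrounding entries: `audit-rules.service` invoked `auditctl -R /etc/audit/audit.rules` after the sudo commands removed the SELinux and default rule files, which is why `augenrules` then reports "No rules".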
Jan 28 01:31:37.860238 sudo[1799]: pam_unix(sudo:session): session closed for user root Jan 28 01:31:37.869109 sshd[1798]: Connection closed by 10.0.0.1 port 41866 Jan 28 01:31:37.872452 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:37.807000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:31:37.920479 kernel: audit: type=1327 audit(1769563897.807:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:31:37.949476 kernel: audit: type=1130 audit(1769563897.839:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:37.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:37.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.048909 kernel: audit: type=1131 audit(1769563897.839:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.049011 kernel: audit: type=1106 audit(1769563897.860:233): pid=1799 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:37.860000 audit[1799]: USER_END pid=1799 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.079439 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:41866.service: Deactivated successfully. Jan 28 01:31:37.860000 audit[1799]: CRED_DISP pid=1799 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.108966 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:31:38.126745 systemd-logind[1594]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:31:37.889000 audit[1794]: USER_END pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.163847 kernel: audit: type=1104 audit(1769563897.860:234): pid=1799 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:38.163973 kernel: audit: type=1106 audit(1769563897.889:235): pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.164008 kernel: audit: type=1104 audit(1769563897.889:236): pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:37.889000 audit[1794]: CRED_DISP pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.158904 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:41872.service - OpenSSH per-connection server daemon (10.0.0.1:41872). Jan 28 01:31:38.166370 systemd-logind[1594]: Removed session 7. Jan 28 01:31:38.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.88:22-10.0.0.1:41866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.258715 kernel: audit: type=1131 audit(1769563898.082:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.88:22-10.0.0.1:41866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:38.421000 audit[1833]: USER_ACCT pid=1833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.424895 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 41872 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:38.424000 audit[1833]: CRED_ACQ pid=1833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.424000 audit[1833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0f914ad0 a2=3 a3=0 items=0 ppid=1 pid=1833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:38.424000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:38.427874 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:38.462987 systemd-logind[1594]: New session 8 of user core. Jan 28 01:31:38.482850 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 01:31:38.500000 audit[1833]: USER_START pid=1833 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.504000 audit[1838]: CRED_ACQ pid=1838 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:38.524619 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:31:38.523000 audit[1839]: USER_ACCT pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.523000 audit[1839]: CRED_REFR pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.526000 audit[1839]: USER_START pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:31:38.525962 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:31:43.099856 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:31:43.127173 (dockerd)[1861]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:31:46.194906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 28 01:31:46.199864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:31:48.191503 dockerd[1861]: time="2026-01-28T01:31:48.190518226Z" level=info msg="Starting up" Jan 28 01:31:48.202552 dockerd[1861]: time="2026-01-28T01:31:48.193819358Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:31:48.720051 dockerd[1861]: time="2026-01-28T01:31:48.719798605Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:31:48.862117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:31:48.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:48.870071 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 28 01:31:48.870166 kernel: audit: type=1130 audit(1769563908.862:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:48.910383 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:31:49.104855 systemd[1]: var-lib-docker-metacopy\x2dcheck1332512044-merged.mount: Deactivated successfully. 
Jan 28 01:31:49.191202 kubelet[1892]: E0128 01:31:49.190359 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:31:49.209599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:31:49.209963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:31:49.216375 systemd[1]: kubelet.service: Consumed 938ms CPU time, 108.3M memory peak. Jan 28 01:31:49.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:31:49.252346 kernel: audit: type=1131 audit(1769563909.214:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:31:49.296174 dockerd[1861]: time="2026-01-28T01:31:49.294109778Z" level=info msg="Loading containers: start." 
Jan 28 01:31:49.381090 kernel: Initializing XFRM netlink socket Jan 28 01:31:50.191000 audit[1929]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.255414 kernel: audit: type=1325 audit(1769563910.191:249): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.191000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64e9af30 a2=0 a3=0 items=0 ppid=1861 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.301146 kernel: audit: type=1300 audit(1769563910.191:249): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64e9af30 a2=0 a3=0 items=0 ppid=1861 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.301385 kernel: audit: type=1327 audit(1769563910.191:249): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:31:50.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:31:50.219000 audit[1931]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.318502 kernel: audit: type=1325 audit(1769563910.219:250): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.318618 kernel: audit: type=1300 audit(1769563910.219:250): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb16753b0 a2=0 a3=0 items=0 ppid=1861 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.219000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb16753b0 a2=0 a3=0 items=0 ppid=1861 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.359816 kernel: audit: type=1327 audit(1769563910.219:250): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:31:50.219000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:31:50.258000 audit[1933]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.387641 kernel: audit: type=1325 audit(1769563910.258:251): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.399884 kernel: audit: type=1300 audit(1769563910.258:251): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8c913960 a2=0 a3=0 items=0 ppid=1861 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.258000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8c913960 a2=0 a3=0 items=0 ppid=1861 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.258000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:31:50.274000 audit[1935]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.274000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff950e6a90 a2=0 a3=0 items=0 ppid=1861 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:31:50.292000 audit[1937]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.292000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff06e4fe40 a2=0 a3=0 items=0 ppid=1861 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:31:50.329000 audit[1939]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.329000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdad3ce9e0 a2=0 a3=0 items=0 ppid=1861 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.329000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:31:50.364000 audit[1941]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.364000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffef601dfb0 a2=0 a3=0 items=0 ppid=1861 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:31:50.409000 audit[1943]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.409000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffedd448230 a2=0 a3=0 items=0 ppid=1861 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:31:50.919000 audit[1946]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.919000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff7bb988a0 a2=0 a3=0 items=0 ppid=1861 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.919000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 01:31:50.992000 audit[1948]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:50.992000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeede73710 a2=0 a3=0 items=0 ppid=1861 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:50.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:31:51.010000 audit[1950]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.010000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff11ba3d80 a2=0 a3=0 items=0 ppid=1861 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:31:51.015000 audit[1952]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.015000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe2629e880 a2=0 a3=0 items=0 ppid=1861 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:31:51.064000 audit[1954]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.064000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd4b8879f0 a2=0 a3=0 items=0 ppid=1861 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:31:51.417000 audit[1984]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.417000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc8c789e10 a2=0 a3=0 items=0 ppid=1861 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:31:51.447000 audit[1986]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.447000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6bdb4830 a2=0 a3=0 items=0 ppid=1861 pid=1986 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.447000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:31:51.477000 audit[1988]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.477000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5af9d2b0 a2=0 a3=0 items=0 ppid=1861 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:31:51.512000 audit[1990]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.512000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc0a69c0 a2=0 a3=0 items=0 ppid=1861 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:31:51.536000 audit[1992]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.536000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff97578330 a2=0 a3=0 items=0 ppid=1861 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.536000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:31:51.581000 audit[1994]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.581000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc3145f880 a2=0 a3=0 items=0 ppid=1861 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:31:51.618000 audit[1996]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.618000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffda5b91850 a2=0 a3=0 items=0 ppid=1861 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:31:51.668000 audit[1998]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.668000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff21550aa0 a2=0 a3=0 items=0 ppid=1861 pid=1998 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:31:51.700000 audit[2000]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.700000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffa674c9c0 a2=0 a3=0 items=0 ppid=1861 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 01:31:51.721000 audit[2002]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.721000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffee9540e0 a2=0 a3=0 items=0 ppid=1861 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:31:51.742000 audit[2004]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 01:31:51.742000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd820636a0 a2=0 a3=0 items=0 ppid=1861 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:31:51.762000 audit[2006]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.762000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffda44e31a0 a2=0 a3=0 items=0 ppid=1861 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:31:51.779000 audit[2008]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.779000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe34ef1130 a2=0 a3=0 items=0 ppid=1861 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:31:51.804000 audit[2013]: NETFILTER_CFG table=filter:28 family=2 entries=1 
op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.804000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf1a88a00 a2=0 a3=0 items=0 ppid=1861 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:31:51.821000 audit[2015]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.821000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc93a97f10 a2=0 a3=0 items=0 ppid=1861 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:31:51.838000 audit[2017]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:51.838000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff1ec17f60 a2=0 a3=0 items=0 ppid=1861 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:31:51.851000 audit[2019]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain 
pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.851000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffdc693b0 a2=0 a3=0 items=0 ppid=1861 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:31:51.872000 audit[2021]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.872000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff2c2d8340 a2=0 a3=0 items=0 ppid=1861 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:31:51.897000 audit[2023]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:31:51.897000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff433e0fe0 a2=0 a3=0 items=0 ppid=1861 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:51.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:31:52.106000 audit[2028]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2028 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.106000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcd293f290 a2=0 a3=0 items=0 ppid=1861 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 01:31:52.146000 audit[2030]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.146000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff7ebe8fb0 a2=0 a3=0 items=0 ppid=1861 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 01:31:52.226000 audit[2038]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.226000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd80db3e50 a2=0 a3=0 items=0 ppid=1861 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.226000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 01:31:52.319000 audit[2044]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.319000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc110d1c40 a2=0 a3=0 items=0 ppid=1861 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 01:31:52.352000 audit[2046]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.352000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff2c104e60 a2=0 a3=0 items=0 ppid=1861 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 01:31:52.369000 audit[2048]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.369000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6e395690 a2=0 a3=0 items=0 ppid=1861 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.369000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 01:31:52.385000 audit[2050]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.385000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe8811a930 a2=0 a3=0 items=0 ppid=1861 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:31:52.403000 audit[2052]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:31:52.403000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd976e3300 a2=0 a3=0 items=0 ppid=1861 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:52.403000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 01:31:52.420359 systemd-networkd[1516]: docker0: Link UP Jan 28 01:31:52.459637 dockerd[1861]: time="2026-01-28T01:31:52.458354041Z" 
level=info msg="Loading containers: done." Jan 28 01:31:52.577101 dockerd[1861]: time="2026-01-28T01:31:52.575422124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:31:52.578017 dockerd[1861]: time="2026-01-28T01:31:52.576344729Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:31:52.579035 dockerd[1861]: time="2026-01-28T01:31:52.578465642Z" level=info msg="Initializing buildkit" Jan 28 01:31:52.743392 dockerd[1861]: time="2026-01-28T01:31:52.742990831Z" level=info msg="Completed buildkit initialization" Jan 28 01:31:52.776692 dockerd[1861]: time="2026-01-28T01:31:52.774677457Z" level=info msg="Daemon has completed initialization" Jan 28 01:31:52.776692 dockerd[1861]: time="2026-01-28T01:31:52.774886922Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:31:52.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:52.778175 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:31:55.902896 containerd[1612]: time="2026-01-28T01:31:55.901198795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:31:57.807797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850601504.mount: Deactivated successfully. Jan 28 01:31:59.561106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:31:59.919519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:32:02.726079 update_engine[1599]: I20260128 01:32:02.715564 1599 update_attempter.cc:509] Updating boot flags... 
Jan 28 01:32:04.599567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:32:04.669512 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 28 01:32:04.671395 kernel: audit: type=1130 audit(1769563924.606:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:04.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:04.698897 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:32:08.754374 kubelet[2160]: E0128 01:32:08.754116 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:32:08.771518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:32:08.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:32:08.773085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:32:08.775519 systemd[1]: kubelet.service: Consumed 1.813s CPU time, 110M memory peak. Jan 28 01:32:08.827926 kernel: audit: type=1131 audit(1769563928.775:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:32:18.462795 containerd[1612]: time="2026-01-28T01:32:18.447635674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:18.472399 containerd[1612]: time="2026-01-28T01:32:18.468970825Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28502313" Jan 28 01:32:18.476877 containerd[1612]: time="2026-01-28T01:32:18.476611415Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:18.502652 containerd[1612]: time="2026-01-28T01:32:18.502453606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:18.504528 containerd[1612]: time="2026-01-28T01:32:18.503760669Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 22.602359648s" Jan 28 01:32:18.504528 containerd[1612]: time="2026-01-28T01:32:18.503805162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:32:18.513814 containerd[1612]: time="2026-01-28T01:32:18.512688935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:32:18.964373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 28 01:32:18.981017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:32:22.156378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:32:22.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:22.194688 kernel: audit: type=1130 audit(1769563942.159:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:22.219178 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:32:23.869557 kubelet[2190]: E0128 01:32:23.868432 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:32:23.878049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:32:23.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:32:23.878704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:32:23.880395 systemd[1]: kubelet.service: Consumed 1.231s CPU time, 110.3M memory peak. Jan 28 01:32:23.915136 kernel: audit: type=1131 audit(1769563943.878:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 28 01:32:31.228842 containerd[1612]: time="2026-01-28T01:32:31.227747169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:31.231830 containerd[1612]: time="2026-01-28T01:32:31.231714521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 28 01:32:31.237740 containerd[1612]: time="2026-01-28T01:32:31.237665046Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:31.248641 containerd[1612]: time="2026-01-28T01:32:31.247895086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:31.262575 containerd[1612]: time="2026-01-28T01:32:31.258820730Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 12.746090457s" Jan 28 01:32:31.268778 containerd[1612]: time="2026-01-28T01:32:31.262860929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 01:32:31.276469 containerd[1612]: time="2026-01-28T01:32:31.275036999Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:32:33.907610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 28 01:32:33.926177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:32:35.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:35.113935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:32:35.174016 kernel: audit: type=1130 audit(1769563955.109:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:35.185805 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:32:35.558699 kubelet[2214]: E0128 01:32:35.526467 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:32:35.597218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:32:35.601422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:32:35.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:32:35.605865 systemd[1]: kubelet.service: Consumed 541ms CPU time, 109.9M memory peak. Jan 28 01:32:35.672446 kernel: audit: type=1131 audit(1769563955.602:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 28 01:32:43.912525 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1407343849 wd_nsec: 1407343664 Jan 28 01:32:45.806209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:32:45.858598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:32:49.587383 containerd[1612]: time="2026-01-28T01:32:49.580843903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:49.636081 containerd[1612]: time="2026-01-28T01:32:49.619436757Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19400173" Jan 28 01:32:49.797801 containerd[1612]: time="2026-01-28T01:32:49.786073556Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:50.094192 containerd[1612]: time="2026-01-28T01:32:50.092619468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:50.106451 containerd[1612]: time="2026-01-28T01:32:50.097689676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 18.822594468s" Jan 28 01:32:50.106451 containerd[1612]: time="2026-01-28T01:32:50.097835576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:32:50.116668 containerd[1612]: time="2026-01-28T01:32:50.115949608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:32:50.273099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:32:50.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:50.361783 kernel: audit: type=1130 audit(1769563970.271:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:50.399204 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:32:51.393825 kubelet[2232]: E0128 01:32:51.392078 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:32:51.584793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:32:51.675250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:32:51.797418 systemd[1]: kubelet.service: Consumed 1.821s CPU time, 110.8M memory peak. Jan 28 01:32:51.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:32:51.901849 kernel: audit: type=1131 audit(1769563971.795:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:01.882782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:33:01.905160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:33:05.611369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277610546.mount: Deactivated successfully. Jan 28 01:33:06.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:06.154566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:33:06.207235 kernel: audit: type=1130 audit(1769563986.153:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:06.218019 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:33:06.713999 kubelet[2257]: E0128 01:33:06.713494 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:33:06.729186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:33:06.730776 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:33:06.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:06.734148 systemd[1]: kubelet.service: Consumed 1.421s CPU time, 110.3M memory peak. Jan 28 01:33:06.769043 kernel: audit: type=1131 audit(1769563986.732:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:13.390186 containerd[1612]: time="2026-01-28T01:33:13.388678061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:13.443741 containerd[1612]: time="2026-01-28T01:33:13.397724321Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31159536" Jan 28 01:33:13.443741 containerd[1612]: time="2026-01-28T01:33:13.399154887Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:13.443741 containerd[1612]: time="2026-01-28T01:33:13.435379427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:13.443741 containerd[1612]: time="2026-01-28T01:33:13.436751156Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 23.320695109s" Jan 28 01:33:13.443741 
containerd[1612]: time="2026-01-28T01:33:13.436880556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:33:13.483668 containerd[1612]: time="2026-01-28T01:33:13.482130801Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:33:15.572083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744683276.mount: Deactivated successfully. Jan 28 01:33:16.908518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:33:16.923949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:33:18.802895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:33:18.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:18.864462 kernel: audit: type=1130 audit(1769563998.804:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:18.936393 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:33:24.171786 kubelet[2285]: E0128 01:33:24.168942 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:33:24.196023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:33:24.197164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:33:24.200835 systemd[1]: kubelet.service: Consumed 1.668s CPU time, 110.4M memory peak. Jan 28 01:33:24.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:24.248974 kernel: audit: type=1131 audit(1769564004.199:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:33:29.474852 containerd[1612]: time="2026-01-28T01:33:29.473369853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:29.479821 containerd[1612]: time="2026-01-28T01:33:29.479659069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18478032" Jan 28 01:33:29.487944 containerd[1612]: time="2026-01-28T01:33:29.487815723Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:29.515849 containerd[1612]: time="2026-01-28T01:33:29.514790512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:29.515849 containerd[1612]: time="2026-01-28T01:33:29.515402468Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 16.032994629s" Jan 28 01:33:29.515849 containerd[1612]: time="2026-01-28T01:33:29.515476867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:33:29.522991 containerd[1612]: time="2026-01-28T01:33:29.522195406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:33:31.814483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243318970.mount: Deactivated successfully. 
Jan 28 01:33:31.969355 containerd[1612]: time="2026-01-28T01:33:31.967453412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:33:31.984708 containerd[1612]: time="2026-01-28T01:33:31.977552193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 01:33:31.986430 containerd[1612]: time="2026-01-28T01:33:31.986350571Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:33:32.023077 containerd[1612]: time="2026-01-28T01:33:32.007537759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:33:32.358999 containerd[1612]: time="2026-01-28T01:33:32.323980869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.801557687s" Jan 28 01:33:32.358999 containerd[1612]: time="2026-01-28T01:33:32.332985651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:33:32.376803 containerd[1612]: time="2026-01-28T01:33:32.373913923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:33:34.508542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Jan 28 01:33:34.618029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:33:35.143210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216084770.mount: Deactivated successfully. Jan 28 01:33:37.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:37.585122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:33:37.634251 kernel: audit: type=1130 audit(1769564017.584:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:37.675443 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:33:38.231148 kubelet[2357]: E0128 01:33:38.230455 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:33:38.264182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:33:38.265551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:33:38.271076 systemd[1]: kubelet.service: Consumed 1.050s CPU time, 109M memory peak. Jan 28 01:33:38.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:33:38.306756 kernel: audit: type=1131 audit(1769564018.269:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:48.434482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 28 01:33:48.467511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:33:50.219066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:33:50.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:50.259329 kernel: audit: type=1130 audit(1769564030.214:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:50.278531 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:33:50.845932 kubelet[2413]: E0128 01:33:50.838462 2413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:33:50.858764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:33:50.859042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:33:50.866870 systemd[1]: kubelet.service: Consumed 970ms CPU time, 110.2M memory peak. 
Jan 28 01:33:50.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:50.908335 kernel: audit: type=1131 audit(1769564030.864:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:33:55.607621 containerd[1612]: time="2026-01-28T01:33:55.606351819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:55.607621 containerd[1612]: time="2026-01-28T01:33:55.613249046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56514996" Jan 28 01:33:55.632614 containerd[1612]: time="2026-01-28T01:33:55.624939102Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:55.650016 containerd[1612]: time="2026-01-28T01:33:55.648608968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:33:55.654029 containerd[1612]: time="2026-01-28T01:33:55.653411697Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 23.279448722s" Jan 28 01:33:55.654029 containerd[1612]: time="2026-01-28T01:33:55.653460768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference 
\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:34:00.948525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 28 01:34:00.982379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:34:01.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:01.864145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:34:01.894000 kernel: audit: type=1130 audit(1769564041.862:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:02.084957 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:34:02.594156 kubelet[2455]: E0128 01:34:02.593660 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:34:02.615715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:34:02.616100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:34:02.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:34:02.619139 systemd[1]: kubelet.service: Consumed 744ms CPU time, 110.5M memory peak. 
Jan 28 01:34:02.653907 kernel: audit: type=1131 audit(1769564042.617:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:34:03.080385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:34:03.080722 systemd[1]: kubelet.service: Consumed 744ms CPU time, 110.5M memory peak. Jan 28 01:34:03.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:03.109049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:34:03.159470 kernel: audit: type=1130 audit(1769564043.079:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:03.159605 kernel: audit: type=1131 audit(1769564043.079:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:03.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:03.417012 systemd[1]: Reload requested from client PID 2470 ('systemctl') (unit session-8.scope)... Jan 28 01:34:03.417094 systemd[1]: Reloading... Jan 28 01:34:04.088109 zram_generator::config[2524]: No configuration found. Jan 28 01:34:05.387369 systemd[1]: Reloading finished in 1964 ms. 
Jan 28 01:34:05.479000 audit: BPF prog-id=63 op=LOAD Jan 28 01:34:05.492594 kernel: audit: type=1334 audit(1769564045.479:310): prog-id=63 op=LOAD Jan 28 01:34:05.479000 audit: BPF prog-id=49 op=UNLOAD Jan 28 01:34:05.479000 audit: BPF prog-id=64 op=LOAD Jan 28 01:34:05.479000 audit: BPF prog-id=65 op=LOAD Jan 28 01:34:05.479000 audit: BPF prog-id=50 op=UNLOAD Jan 28 01:34:05.479000 audit: BPF prog-id=51 op=UNLOAD Jan 28 01:34:05.493000 audit: BPF prog-id=66 op=LOAD Jan 28 01:34:05.493000 audit: BPF prog-id=46 op=UNLOAD Jan 28 01:34:05.493000 audit: BPF prog-id=67 op=LOAD Jan 28 01:34:05.493000 audit: BPF prog-id=68 op=LOAD Jan 28 01:34:05.493000 audit: BPF prog-id=47 op=UNLOAD Jan 28 01:34:05.493000 audit: BPF prog-id=48 op=UNLOAD Jan 28 01:34:05.520074 kernel: audit: type=1334 audit(1769564045.479:311): prog-id=49 op=UNLOAD Jan 28 01:34:05.520188 kernel: audit: type=1334 audit(1769564045.479:312): prog-id=64 op=LOAD Jan 28 01:34:05.520229 kernel: audit: type=1334 audit(1769564045.479:313): prog-id=65 op=LOAD Jan 28 01:34:05.520484 kernel: audit: type=1334 audit(1769564045.479:314): prog-id=50 op=UNLOAD Jan 28 01:34:05.520524 kernel: audit: type=1334 audit(1769564045.479:315): prog-id=51 op=UNLOAD Jan 28 01:34:05.502000 audit: BPF prog-id=69 op=LOAD Jan 28 01:34:05.502000 audit: BPF prog-id=59 op=UNLOAD Jan 28 01:34:05.510000 audit: BPF prog-id=70 op=LOAD Jan 28 01:34:05.510000 audit: BPF prog-id=43 op=UNLOAD Jan 28 01:34:05.512000 audit: BPF prog-id=71 op=LOAD Jan 28 01:34:05.512000 audit: BPF prog-id=52 op=UNLOAD Jan 28 01:34:05.512000 audit: BPF prog-id=72 op=LOAD Jan 28 01:34:05.519000 audit: BPF prog-id=73 op=LOAD Jan 28 01:34:05.523000 audit: BPF prog-id=53 op=UNLOAD Jan 28 01:34:05.523000 audit: BPF prog-id=54 op=UNLOAD Jan 28 01:34:05.531000 audit: BPF prog-id=74 op=LOAD Jan 28 01:34:05.531000 audit: BPF prog-id=60 op=UNLOAD Jan 28 01:34:05.534000 audit: BPF prog-id=75 op=LOAD Jan 28 01:34:05.534000 audit: BPF prog-id=76 op=LOAD Jan 28 01:34:05.534000 
audit: BPF prog-id=61 op=UNLOAD Jan 28 01:34:05.534000 audit: BPF prog-id=62 op=UNLOAD Jan 28 01:34:05.534000 audit: BPF prog-id=77 op=LOAD Jan 28 01:34:05.534000 audit: BPF prog-id=78 op=LOAD Jan 28 01:34:05.534000 audit: BPF prog-id=44 op=UNLOAD Jan 28 01:34:05.534000 audit: BPF prog-id=45 op=UNLOAD Jan 28 01:34:05.547000 audit: BPF prog-id=79 op=LOAD Jan 28 01:34:05.547000 audit: BPF prog-id=55 op=UNLOAD Jan 28 01:34:05.547000 audit: BPF prog-id=80 op=LOAD Jan 28 01:34:05.547000 audit: BPF prog-id=81 op=LOAD Jan 28 01:34:05.551000 audit: BPF prog-id=56 op=UNLOAD Jan 28 01:34:05.551000 audit: BPF prog-id=57 op=UNLOAD Jan 28 01:34:05.554000 audit: BPF prog-id=82 op=LOAD Jan 28 01:34:05.554000 audit: BPF prog-id=58 op=UNLOAD Jan 28 01:34:05.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:34:05.730408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:34:05.730623 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:34:05.731361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:34:05.731463 systemd[1]: kubelet.service: Consumed 348ms CPU time, 98.7M memory peak. Jan 28 01:34:05.767722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:34:07.229613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:34:07.260919 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 28 01:34:07.261088 kernel: audit: type=1130 audit(1769564047.233:351): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:07.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:07.379203 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:34:08.400698 kubelet[2564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:34:08.400698 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:34:08.400698 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
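The deprecated-flag warnings above point at moving these settings into the kubelet's config file. A sketch of what the equivalent KubeletConfiguration fragment might look like — the endpoint value is an illustrative assumption, not read from this host; the volume plugin path is the one the kubelet probes later in this log:

```yaml
# Hedged sketch of a KubeletConfiguration replacing the deprecated flags above.
# Field names are from the kubelet.config.k8s.io/v1beta1 API; the socket path
# is a placeholder assumption.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"   # replaces --container-runtime-endpoint
volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"  # replaces --volume-plugin-dir
# --pod-infra-container-image has no config-file field; per the warning it moves
# to the CRI runtime's sandbox image setting once removed in 1.35.
```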
Jan 28 01:34:08.400698 kubelet[2564]: I0128 01:34:08.404161 2564 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:34:10.897674 kubelet[2564]: I0128 01:34:10.896579 2564 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:34:10.897674 kubelet[2564]: I0128 01:34:10.896667 2564 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:34:10.904971 kubelet[2564]: I0128 01:34:10.902228 2564 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:34:11.291376 kubelet[2564]: I0128 01:34:11.290854 2564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:34:11.296039 kubelet[2564]: E0128 01:34:11.291886 2564 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:11.375834 kubelet[2564]: I0128 01:34:11.374035 2564 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:34:11.435338 kubelet[2564]: I0128 01:34:11.433701 2564 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:34:11.435338 kubelet[2564]: I0128 01:34:11.434601 2564 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:34:11.435338 kubelet[2564]: I0128 01:34:11.434653 2564 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:34:11.435338 kubelet[2564]: I0128 01:34:11.435014 2564 topology_manager.go:138] "Creating topology manager with none policy" 
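The HardEvictionThresholds embedded in the NodeConfig dump above can be hard to read in JSON form. Expressed as the corresponding KubeletConfiguration fields, the logged values would read roughly as follows (a sketch mirroring the dump, where `Percentage:0.1` maps to `"10%"` and `Quantity:"100Mi"` stays a quantity string):

```yaml
# Sketch of the eviction settings implied by the NodeConfig dump above.
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
```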
Jan 28 01:34:11.436427 kubelet[2564]: I0128 01:34:11.435032 2564 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:34:11.440561 kubelet[2564]: I0128 01:34:11.439973 2564 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:34:11.465361 kubelet[2564]: I0128 01:34:11.465186 2564 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:34:11.474533 kubelet[2564]: I0128 01:34:11.469903 2564 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:34:11.474533 kubelet[2564]: I0128 01:34:11.470972 2564 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:34:11.476398 kubelet[2564]: I0128 01:34:11.475551 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:34:11.481463 kubelet[2564]: W0128 01:34:11.481397 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:11.481664 kubelet[2564]: E0128 01:34:11.481633 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:11.505457 kubelet[2564]: I0128 01:34:11.505408 2564 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:34:11.517624 kubelet[2564]: I0128 01:34:11.507460 2564 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:34:11.517624 kubelet[2564]: W0128 01:34:11.507612 2564 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Jan 28 01:34:11.517624 kubelet[2564]: W0128 01:34:11.517023 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:11.517624 kubelet[2564]: E0128 01:34:11.517122 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:11.531036 kubelet[2564]: I0128 01:34:11.527480 2564 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:34:11.531036 kubelet[2564]: I0128 01:34:11.527524 2564 server.go:1287] "Started kubelet" Jan 28 01:34:11.531036 kubelet[2564]: I0128 01:34:11.529371 2564 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:34:11.536110 kubelet[2564]: I0128 01:34:11.533391 2564 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:34:11.542036 kubelet[2564]: I0128 01:34:11.538717 2564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:34:11.560161 kubelet[2564]: I0128 01:34:11.560121 2564 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:34:11.566686 kubelet[2564]: E0128 01:34:11.562148 2564 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:34:11.571571 kubelet[2564]: I0128 01:34:11.564417 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:34:11.571571 kubelet[2564]: E0128 01:34:11.547491 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec1211a99011f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,LastTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:34:11.571571 kubelet[2564]: I0128 01:34:11.566205 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:34:11.571571 kubelet[2564]: I0128 01:34:11.567980 2564 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:34:11.571571 kubelet[2564]: I0128 01:34:11.568225 2564 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:34:11.571571 kubelet[2564]: I0128 01:34:11.568493 2564 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:34:11.571571 kubelet[2564]: E0128 01:34:11.570909 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:11.579941 kubelet[2564]: E0128 01:34:11.574921 2564 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Jan 28 01:34:11.579941 kubelet[2564]: W0128 01:34:11.575421 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:11.579941 kubelet[2564]: E0128 01:34:11.575494 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:11.596888 kubelet[2564]: I0128 01:34:11.592828 2564 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:34:11.596888 kubelet[2564]: I0128 01:34:11.592903 2564 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:34:11.596888 kubelet[2564]: I0128 01:34:11.593082 2564 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:34:11.624000 audit[2580]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.672212 kubelet[2564]: E0128 01:34:11.671351 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:11.682237 kernel: audit: type=1325 audit(1769564051.624:352): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.682542 kernel: audit: type=1300 
audit(1769564051.624:352): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffde76ed60 a2=0 a3=0 items=0 ppid=2564 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.624000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffde76ed60 a2=0 a3=0 items=0 ppid=2564 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:34:11.752761 kernel: audit: type=1327 audit(1769564051.624:352): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:34:11.752952 kernel: audit: type=1325 audit(1769564051.653:353): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.653000 audit[2583]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.762446 kubelet[2564]: I0128 01:34:11.759631 2564 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:34:11.762446 kubelet[2564]: I0128 01:34:11.759656 2564 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:34:11.762446 kubelet[2564]: I0128 01:34:11.759718 2564 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:34:11.772589 kubelet[2564]: E0128 01:34:11.772556 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:11.782349 kubelet[2564]: E0128 01:34:11.777970 2564 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Jan 28 01:34:11.782752 kubelet[2564]: I0128 01:34:11.779614 2564 policy_none.go:49] "None policy: Start" Jan 28 01:34:11.783071 kubelet[2564]: I0128 01:34:11.783047 2564 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:34:11.783446 kubelet[2564]: I0128 01:34:11.783423 2564 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:34:11.653000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d740e20 a2=0 a3=0 items=0 ppid=2564 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.844585 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 28 01:34:11.868620 kernel: audit: type=1300 audit(1769564051.653:353): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d740e20 a2=0 a3=0 items=0 ppid=2564 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:34:11.877482 kubelet[2564]: E0128 01:34:11.877444 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:11.935755 kernel: audit: type=1327 audit(1769564051.653:353): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:34:11.935994 kernel: audit: type=1325 audit(1769564051.672:354): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.672000 audit[2585]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.672000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff4bad50b0 a2=0 a3=0 items=0 ppid=2564 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.969743 kubelet[2564]: I0128 01:34:11.969467 2564 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 28 01:34:11.978704 kubelet[2564]: E0128 01:34:11.978658 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:12.014144 kubelet[2564]: I0128 01:34:12.010431 2564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:34:12.014144 kubelet[2564]: I0128 01:34:12.010622 2564 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:34:12.014144 kubelet[2564]: I0128 01:34:12.010663 2564 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:34:12.014144 kubelet[2564]: I0128 01:34:12.010675 2564 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:34:12.014144 kubelet[2564]: E0128 01:34:12.010881 2564 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:34:12.014873 kubelet[2564]: W0128 01:34:12.014667 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:12.014873 kubelet[2564]: E0128 01:34:12.014743 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:12.057990 kernel: audit: type=1300 audit(1769564051.672:354): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff4bad50b0 a2=0 a3=0 items=0 ppid=2564 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.058131 kernel: audit: type=1327 audit(1769564051.672:354): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:34:11.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:34:11.727000 audit[2587]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.727000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcec42a220 a2=0 a3=0 items=0 ppid=2564 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:34:11.961000 audit[2592]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.961000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc98ce94c0 a2=0 a3=0 items=0 ppid=2564 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 01:34:11.999000 audit[2594]: NETFILTER_CFG 
table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:11.999000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1c1fbb90 a2=0 a3=0 items=0 ppid=2564 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:11.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:34:12.007000 audit[2593]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:34:12.007000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffefca6be20 a2=0 a3=0 items=0 ppid=2564 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:34:12.035000 audit[2597]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:12.035000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1875b220 a2=0 a3=0 items=0 ppid=2564 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:34:12.042000 audit[2596]: 
NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:34:12.042000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed613e240 a2=0 a3=0 items=0 ppid=2564 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:34:12.074000 audit[2599]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:34:12.074000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff86db04d0 a2=0 a3=0 items=0 ppid=2564 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:34:12.074000 audit[2598]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:34:12.074000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe577a4e50 a2=0 a3=0 items=0 ppid=2564 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 
01:34:12.084145 kubelet[2564]: E0128 01:34:12.084105 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:12.087000 audit[2600]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:34:12.087000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefa967e30 a2=0 a3=0 items=0 ppid=2564 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:34:12.112498 kubelet[2564]: E0128 01:34:12.111959 2564 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:34:12.141550 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:34:12.186324 kubelet[2564]: E0128 01:34:12.185498 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:12.188527 kubelet[2564]: E0128 01:34:12.186958 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Jan 28 01:34:12.189208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
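The lease controller's retry interval in the records above doubles on each failed attempt: 200ms, then 400ms, then 800ms. A minimal sketch of that progression, assuming a plain exponential policy with factor 2 and no jitter or cap (the real client-go backoff also jitters and caps the interval):

```python
def backoff_intervals(initial: float, retries: int, factor: float = 2.0):
    """Yield successive retry intervals, multiplying by `factor` each time.

    With initial=0.2 this reproduces the 200ms -> 400ms -> 800ms
    progression visible in the lease-controller log lines above.
    """
    interval = initial
    for _ in range(retries):
        yield interval
        interval *= factor

print(list(backoff_intervals(0.2, 3)))  # -> [0.2, 0.4, 0.8]
```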
Jan 28 01:34:12.289474 kubelet[2564]: E0128 01:34:12.286444 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:12.291020 kubelet[2564]: I0128 01:34:12.290009 2564 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:34:12.291020 kubelet[2564]: I0128 01:34:12.290626 2564 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:34:12.291020 kubelet[2564]: I0128 01:34:12.290701 2564 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:34:12.327610 kubelet[2564]: I0128 01:34:12.302625 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:34:12.356961 kubelet[2564]: E0128 01:34:12.350978 2564 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:34:12.356961 kubelet[2564]: E0128 01:34:12.351192 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:34:12.364084 kubelet[2564]: W0128 01:34:12.361214 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:12.364084 kubelet[2564]: E0128 01:34:12.361419 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:12.389722 kubelet[2564]: I0128 01:34:12.389634 2564 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:34:12.389974 kubelet[2564]: I0128 01:34:12.389951 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:34:12.390069 kubelet[2564]: I0128 01:34:12.390050 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:12.390198 kubelet[2564]: I0128 01:34:12.390177 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:12.390434 kubelet[2564]: I0128 01:34:12.390412 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:12.390536 
kubelet[2564]: I0128 01:34:12.390515 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:34:12.390622 kubelet[2564]: I0128 01:34:12.390605 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:12.390715 kubelet[2564]: I0128 01:34:12.390698 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:12.402231 kubelet[2564]: I0128 01:34:12.393977 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:34:12.409412 kubelet[2564]: I0128 01:34:12.408229 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:12.416180 kubelet[2564]: E0128 01:34:12.413760 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:12.414156 systemd[1]: Created slice 
kubepods-burstable-podb6edb375440e4b291c7482c8ceb5f2e7.slice - libcontainer container kubepods-burstable-podb6edb375440e4b291c7482c8ceb5f2e7.slice. Jan 28 01:34:12.446081 kubelet[2564]: E0128 01:34:12.445951 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:12.452884 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 28 01:34:12.470181 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 28 01:34:12.481650 kubelet[2564]: E0128 01:34:12.481522 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:12.493093 kubelet[2564]: E0128 01:34:12.491945 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:12.689477 kubelet[2564]: I0128 01:34:12.686751 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:12.695727 kubelet[2564]: E0128 01:34:12.690988 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:12.779974 kubelet[2564]: E0128 01:34:12.777680 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:12.859754 kubelet[2564]: E0128 01:34:12.809190 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:12.859754 kubelet[2564]: E0128 01:34:12.816511 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:12.869868 kubelet[2564]: W0128 01:34:12.869637 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:12.870170 kubelet[2564]: E0128 01:34:12.870139 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:12.924464 containerd[1612]: time="2026-01-28T01:34:12.911759061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:34:12.924464 containerd[1612]: time="2026-01-28T01:34:12.989013603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:34:12.924464 containerd[1612]: time="2026-01-28T01:34:12.992441989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6edb375440e4b291c7482c8ceb5f2e7,Namespace:kube-system,Attempt:0,}" Jan 28 01:34:13.321468 kubelet[2564]: W0128 01:34:13.307702 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.88:6443: connect: connection refused Jan 28 01:34:13.321468 kubelet[2564]: E0128 01:34:13.307877 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:13.321468 kubelet[2564]: W0128 01:34:13.308144 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:13.321468 kubelet[2564]: E0128 01:34:13.308371 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:13.321468 kubelet[2564]: E0128 01:34:13.308139 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Jan 28 01:34:13.321468 kubelet[2564]: I0128 01:34:13.320175 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:13.332047 kubelet[2564]: E0128 01:34:13.323187 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:13.411127 kubelet[2564]: E0128 01:34:13.410676 2564 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed 
while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:14.178897 kubelet[2564]: I0128 01:34:14.167032 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:14.178897 kubelet[2564]: E0128 01:34:14.167988 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:14.180502 containerd[1612]: time="2026-01-28T01:34:14.174386706Z" level=info msg="connecting to shim a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df" address="unix:///run/containerd/s/0cf8acde16ffb195c6f80d9b444f4d007e850459187eab2f54d297c8d217061b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:34:14.265999 containerd[1612]: time="2026-01-28T01:34:14.265936308Z" level=info msg="connecting to shim 445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384" address="unix:///run/containerd/s/392b35e01d38e94f7fb9c323d815dc51ba765fbf7e43bf6e8a30aeb2572bdd05" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:34:14.267482 containerd[1612]: time="2026-01-28T01:34:14.267443611Z" level=info msg="connecting to shim 646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1" address="unix:///run/containerd/s/9f1abb83d31b9d22e8e0e3b7999dce2fc512d5c9d1c633bf2f7c52ea2b0b2d95" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:34:14.409022 kubelet[2564]: W0128 01:34:14.408432 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:14.409022 kubelet[2564]: E0128 
01:34:14.408497 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:14.933633 kubelet[2564]: E0128 01:34:14.918376 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="3.2s" Jan 28 01:34:14.933633 kubelet[2564]: W0128 01:34:14.925697 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:14.933633 kubelet[2564]: E0128 01:34:14.932968 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:14.979992 kubelet[2564]: W0128 01:34:14.979941 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:14.980204 kubelet[2564]: E0128 01:34:14.980176 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:15.100250 systemd[1]: Started cri-containerd-646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1.scope - libcontainer container 646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1. Jan 28 01:34:15.205359 systemd[1]: Started cri-containerd-a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df.scope - libcontainer container a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df. Jan 28 01:34:15.870434 kubelet[2564]: E0128 01:34:15.807358 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec1211a99011f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,LastTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:34:15.870434 kubelet[2564]: W0128 01:34:15.868762 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:15.870434 kubelet[2564]: E0128 01:34:15.869071 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:15.819398 systemd[1]: Started cri-containerd-445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384.scope - libcontainer container 445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384. Jan 28 01:34:15.964000 audit: BPF prog-id=83 op=LOAD Jan 28 01:34:15.994060 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 01:34:16.001712 kernel: audit: type=1334 audit(1769564055.964:364): prog-id=83 op=LOAD Jan 28 01:34:15.965000 audit: BPF prog-id=84 op=LOAD Jan 28 01:34:16.075034 kernel: audit: type=1334 audit(1769564055.965:365): prog-id=84 op=LOAD Jan 28 01:34:15.965000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.458382 kernel: audit: type=1300 audit(1769564055.965:365): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.463582 kernel: audit: type=1327 audit(1769564055.965:365): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.965000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.965000 audit: BPF prog-id=84 op=UNLOAD Jan 28 01:34:16.680939 kernel: audit: type=1334 audit(1769564055.965:366): prog-id=84 op=UNLOAD Jan 28 01:34:16.689755 kernel: audit: type=1300 audit(1769564055.965:366): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.693001 kernel: audit: type=1327 audit(1769564055.965:366): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:16.693086 kernel: audit: type=1334 audit(1769564055.972:367): prog-id=85 op=LOAD Jan 28 01:34:16.693129 kernel: audit: type=1300 audit(1769564055.972:367): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.693162 kernel: audit: type=1327 audit(1769564055.972:367): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.965000 audit[2655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.972000 audit: BPF prog-id=85 op=LOAD Jan 28 01:34:15.972000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.972000 audit: BPF prog-id=86 op=LOAD Jan 28 01:34:15.972000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.972000 audit: BPF prog-id=86 op=UNLOAD Jan 28 01:34:15.972000 audit[2655]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.972000 audit: BPF prog-id=85 op=UNLOAD Jan 28 01:34:15.972000 audit[2655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:15.972000 audit: BPF prog-id=87 op=LOAD Jan 28 01:34:15.972000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2638 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:15.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634366130383133353862343534626663346564383930636366636265 Jan 28 01:34:16.701946 kubelet[2564]: I0128 01:34:16.517169 2564 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Jan 28 01:34:16.701946 kubelet[2564]: E0128 01:34:16.518596 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:16.766000 audit: BPF prog-id=88 op=LOAD Jan 28 01:34:16.775000 audit: BPF prog-id=89 op=LOAD Jan 28 01:34:16.775000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00014a238 a2=98 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.775000 audit: BPF prog-id=89 op=UNLOAD Jan 28 01:34:16.775000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.777000 audit: BPF prog-id=90 op=LOAD Jan 28 01:34:16.788000 audit: BPF prog-id=91 op=LOAD Jan 28 01:34:16.788000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00014a488 a2=98 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.788000 audit: BPF prog-id=92 op=LOAD Jan 28 01:34:16.788000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00014a218 a2=98 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.788000 audit: BPF prog-id=92 op=UNLOAD Jan 28 01:34:16.788000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.803000 audit: BPF prog-id=91 op=UNLOAD Jan 28 01:34:16.803000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 
pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.803000 audit: BPF prog-id=93 op=LOAD Jan 28 01:34:16.803000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00014a6e8 a2=98 a3=0 items=0 ppid=2611 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135366264363939323430336638393937336164613761643333353566 Jan 28 01:34:16.993000 audit: BPF prog-id=94 op=LOAD Jan 28 01:34:16.993000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:16.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.004000 audit: BPF prog-id=94 op=UNLOAD Jan 28 01:34:17.004000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.019000 audit: BPF prog-id=95 op=LOAD Jan 28 01:34:17.019000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.019000 audit: BPF prog-id=96 op=LOAD Jan 28 01:34:17.019000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.034000 audit: BPF prog-id=96 op=UNLOAD Jan 28 01:34:17.034000 audit[2652]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.034000 audit: BPF prog-id=95 op=UNLOAD Jan 28 01:34:17.034000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.039000 audit: BPF prog-id=97 op=LOAD Jan 28 01:34:17.039000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2627 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:17.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434356239376665383632353836656137333433336639373231663738 Jan 28 01:34:17.423752 containerd[1612]: 
time="2026-01-28T01:34:17.423504113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\"" Jan 28 01:34:17.435690 kubelet[2564]: E0128 01:34:17.435650 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:17.505942 containerd[1612]: time="2026-01-28T01:34:17.505126858Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:34:17.535949 containerd[1612]: time="2026-01-28T01:34:17.533511085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\"" Jan 28 01:34:17.537451 kubelet[2564]: E0128 01:34:17.537409 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:17.559145 containerd[1612]: time="2026-01-28T01:34:17.559078670Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:34:17.623654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1639249081.mount: Deactivated successfully. 
Jan 28 01:34:17.662166 containerd[1612]: time="2026-01-28T01:34:17.660009567Z" level=info msg="Container 7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:34:17.685945 containerd[1612]: time="2026-01-28T01:34:17.684086596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6edb375440e4b291c7482c8ceb5f2e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384\"" Jan 28 01:34:17.686064 kubelet[2564]: E0128 01:34:17.685774 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:17.697167 containerd[1612]: time="2026-01-28T01:34:17.695682574Z" level=info msg="Container bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:34:17.711555 containerd[1612]: time="2026-01-28T01:34:17.710686414Z" level=info msg="CreateContainer within sandbox \"445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:34:17.743191 containerd[1612]: time="2026-01-28T01:34:17.740609326Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\"" Jan 28 01:34:17.751615 containerd[1612]: time="2026-01-28T01:34:17.750402348Z" level=info msg="StartContainer for \"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\"" Jan 28 01:34:17.783113 containerd[1612]: time="2026-01-28T01:34:17.770943300Z" level=info msg="connecting to shim 7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6" 
address="unix:///run/containerd/s/0cf8acde16ffb195c6f80d9b444f4d007e850459187eab2f54d297c8d217061b" protocol=ttrpc version=3 Jan 28 01:34:17.803719 kubelet[2564]: E0128 01:34:17.773071 2564 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:17.804040 containerd[1612]: time="2026-01-28T01:34:17.802713475Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\"" Jan 28 01:34:17.995150 containerd[1612]: time="2026-01-28T01:34:17.989421769Z" level=info msg="StartContainer for \"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\"" Jan 28 01:34:18.005204 containerd[1612]: time="2026-01-28T01:34:18.001081407Z" level=info msg="connecting to shim bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83" address="unix:///run/containerd/s/9f1abb83d31b9d22e8e0e3b7999dce2fc512d5c9d1c633bf2f7c52ea2b0b2d95" protocol=ttrpc version=3 Jan 28 01:34:18.095745 containerd[1612]: time="2026-01-28T01:34:18.095168558Z" level=info msg="Container 7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:34:18.129001 kubelet[2564]: E0128 01:34:18.125363 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="6.4s" Jan 28 01:34:18.315645 kubelet[2564]: W0128 01:34:18.312029 
2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:18.315645 kubelet[2564]: E0128 01:34:18.312660 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:18.330146 kubelet[2564]: W0128 01:34:18.329687 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:18.331221 kubelet[2564]: E0128 01:34:18.330932 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:18.361950 containerd[1612]: time="2026-01-28T01:34:18.361891066Z" level=info msg="CreateContainer within sandbox \"445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9\"" Jan 28 01:34:18.377942 containerd[1612]: time="2026-01-28T01:34:18.375480602Z" level=info msg="StartContainer for \"7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9\"" Jan 28 01:34:18.406000 containerd[1612]: time="2026-01-28T01:34:18.404754520Z" level=info msg="connecting to shim 
7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9" address="unix:///run/containerd/s/392b35e01d38e94f7fb9c323d815dc51ba765fbf7e43bf6e8a30aeb2572bdd05" protocol=ttrpc version=3 Jan 28 01:34:18.431389 systemd[1]: Started cri-containerd-bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83.scope - libcontainer container bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83. Jan 28 01:34:18.632382 systemd[1]: Started cri-containerd-7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6.scope - libcontainer container 7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6. Jan 28 01:34:18.825000 audit: BPF prog-id=98 op=LOAD Jan 28 01:34:18.829000 audit: BPF prog-id=99 op=LOAD Jan 28 01:34:18.829000 audit[2741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.829000 audit: BPF prog-id=99 op=UNLOAD Jan 28 01:34:18.829000 audit[2741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.829000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.829000 audit: BPF prog-id=100 op=LOAD Jan 28 01:34:18.829000 audit[2741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.829000 audit: BPF prog-id=101 op=LOAD Jan 28 01:34:18.829000 audit[2741]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.829000 audit: BPF prog-id=101 op=UNLOAD Jan 28 01:34:18.829000 audit[2741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:34:18.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.850000 audit: BPF prog-id=100 op=UNLOAD Jan 28 01:34:18.850000 audit[2741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.850000 audit: BPF prog-id=102 op=LOAD Jan 28 01:34:18.850000 audit[2741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2638 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263613835616334663866303039393934613335643665383966366365 Jan 28 01:34:18.867441 systemd[1]: Started cri-containerd-7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9.scope - libcontainer container 7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9. 
Jan 28 01:34:18.914000 audit: BPF prog-id=103 op=LOAD Jan 28 01:34:18.914000 audit: BPF prog-id=104 op=LOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=104 op=UNLOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=105 op=LOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=106 op=LOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=106 op=UNLOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=105 op=UNLOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:18.914000 audit: BPF prog-id=107 op=LOAD Jan 28 01:34:18.914000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2611 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313865306630393831666162653363353839623330366239663434 Jan 28 01:34:19.030000 audit: BPF prog-id=108 op=LOAD Jan 28 01:34:19.056000 audit: BPF prog-id=109 op=LOAD Jan 28 01:34:19.056000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.056000 audit: BPF prog-id=109 op=UNLOAD Jan 28 01:34:19.056000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.056000 audit: BPF prog-id=110 op=LOAD Jan 28 01:34:19.056000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.061000 audit: BPF prog-id=111 op=LOAD Jan 28 01:34:19.061000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.061000 audit: BPF prog-id=111 op=UNLOAD Jan 28 01:34:19.061000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.061000 audit: BPF prog-id=110 op=UNLOAD Jan 28 01:34:19.061000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.061000 audit: BPF prog-id=112 op=LOAD Jan 28 01:34:19.061000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2627 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303931396439353236313436363438613865343161313639333336 Jan 28 01:34:19.337734 kubelet[2564]: W0128 01:34:19.328413 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:19.337734 kubelet[2564]: E0128 01:34:19.328519 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:19.916361 kubelet[2564]: I0128 01:34:19.900009 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:19.916361 kubelet[2564]: E0128 01:34:19.915982 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 28 01:34:19.935496 containerd[1612]: time="2026-01-28T01:34:19.935361450Z" level=info msg="StartContainer for \"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\" returns successfully" Jan 28 01:34:19.940603 containerd[1612]: time="2026-01-28T01:34:19.940415149Z" level=info msg="StartContainer for \"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\" returns successfully" Jan 28 01:34:20.025359 containerd[1612]: time="2026-01-28T01:34:20.024135472Z" level=info msg="StartContainer for \"7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9\" returns successfully" Jan 28 01:34:20.709222 kubelet[2564]: E0128 01:34:20.709125 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:20.711983 kubelet[2564]: E0128 01:34:20.709748 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
28 01:34:20.746029 kubelet[2564]: W0128 01:34:20.735376 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 28 01:34:20.746029 kubelet[2564]: E0128 01:34:20.735493 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:34:20.828475 kubelet[2564]: E0128 01:34:20.816415 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:20.828475 kubelet[2564]: E0128 01:34:20.816685 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:20.839202 kubelet[2564]: E0128 01:34:20.838121 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:20.839202 kubelet[2564]: E0128 01:34:20.838515 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:22.580702 kubelet[2564]: E0128 01:34:22.579459 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:34:24.023495 kubelet[2564]: E0128 01:34:24.018125 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Jan 28 01:34:24.189377 kubelet[2564]: E0128 01:34:24.185502 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:24.206095 kubelet[2564]: E0128 01:34:24.205721 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:24.206095 kubelet[2564]: E0128 01:34:24.205766 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:24.209364 kubelet[2564]: E0128 01:34:24.208995 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:24.215899 kubelet[2564]: E0128 01:34:24.215633 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:24.526348 kubelet[2564]: E0128 01:34:24.525371 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:24.526348 kubelet[2564]: E0128 01:34:24.525585 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:24.526348 kubelet[2564]: E0128 01:34:24.526016 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:24.526348 kubelet[2564]: E0128 01:34:24.526165 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:24.585602 kubelet[2564]: E0128 01:34:24.580695 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:24.585602 kubelet[2564]: E0128 01:34:24.583800 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:26.400552 kubelet[2564]: I0128 01:34:26.394689 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:32.584716 kubelet[2564]: E0128 01:34:32.581573 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:34:34.109072 kubelet[2564]: E0128 01:34:34.105851 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:34.109072 kubelet[2564]: E0128 01:34:34.106163 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:34.577438 kubelet[2564]: E0128 01:34:34.574183 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 28 01:34:34.944676 kubelet[2564]: W0128 01:34:34.895879 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:34:34.944676 kubelet[2564]: E0128 01:34:34.896616 2564 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:34:35.875637 kubelet[2564]: E0128 01:34:35.860473 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ec1211a99011f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,LastTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:34:36.178059 kubelet[2564]: W0128 01:34:36.177475 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:34:36.178059 kubelet[2564]: E0128 01:34:36.178003 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:34:36.186437 kubelet[2564]: E0128 01:34:36.182204 2564 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:34:36.452229 kubelet[2564]: E0128 01:34:36.431456 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 28 01:34:39.466739 kubelet[2564]: W0128 01:34:39.461415 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:34:39.466739 kubelet[2564]: E0128 01:34:39.461790 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:34:42.097801 kubelet[2564]: W0128 01:34:42.068590 2564 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:34:42.299199 kubelet[2564]: E0128 01:34:42.296135 2564 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:34:42.612546 kubelet[2564]: E0128 01:34:42.611997 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:34:43.559918 
kubelet[2564]: I0128 01:34:43.544902 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:34:46.655072 kubelet[2564]: E0128 01:34:46.594795 2564 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:34:46.816153 kubelet[2564]: I0128 01:34:46.813734 2564 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:34:46.816153 kubelet[2564]: E0128 01:34:46.813786 2564 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:34:46.816153 kubelet[2564]: E0128 01:34:46.820572 2564 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec1211a99011f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,LastTimestamp:2026-01-28 01:34:11.527500063 +0000 UTC m=+4.102778803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:34:46.873116 kubelet[2564]: E0128 01:34:46.873077 2564 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:34:46.903132 kubelet[2564]: E0128 01:34:46.895717 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:47.016098 kubelet[2564]: E0128 01:34:47.015647 2564 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.155450 kubelet[2564]: E0128 01:34:47.153935 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.255772 kubelet[2564]: E0128 01:34:47.254797 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.357777 kubelet[2564]: E0128 01:34:47.355146 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.457634 kubelet[2564]: E0128 01:34:47.457582 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.576596 kubelet[2564]: E0128 01:34:47.567523 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.671298 kubelet[2564]: E0128 01:34:47.668407 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.770814 kubelet[2564]: E0128 01:34:47.770423 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:47.877071 kubelet[2564]: E0128 01:34:47.874515 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.049086 kubelet[2564]: E0128 01:34:47.992418 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.110394 kubelet[2564]: E0128 01:34:48.110131 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.261143 kubelet[2564]: E0128 01:34:48.256443 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.363525 
kubelet[2564]: E0128 01:34:48.359454 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.461459 kubelet[2564]: E0128 01:34:48.460671 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.563026 kubelet[2564]: E0128 01:34:48.562659 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.689231 kubelet[2564]: E0128 01:34:48.681471 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.792941 kubelet[2564]: E0128 01:34:48.787583 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:48.920501 kubelet[2564]: E0128 01:34:48.896775 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.036603 kubelet[2564]: E0128 01:34:49.023251 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.213830 kubelet[2564]: E0128 01:34:49.165439 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.275895 kubelet[2564]: E0128 01:34:49.275826 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.397758 kubelet[2564]: E0128 01:34:49.376477 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.479478 kubelet[2564]: E0128 01:34:49.479421 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.582846 kubelet[2564]: E0128 01:34:49.581130 2564 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.695547 kubelet[2564]: E0128 01:34:49.692038 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.811171 kubelet[2564]: E0128 01:34:49.808762 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:49.909607 kubelet[2564]: E0128 01:34:49.909565 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.018634 kubelet[2564]: E0128 01:34:50.017839 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.126471 kubelet[2564]: E0128 01:34:50.126369 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.406551 kubelet[2564]: E0128 01:34:50.280704 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.406551 kubelet[2564]: E0128 01:34:50.385934 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.490379 kubelet[2564]: E0128 01:34:50.489703 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.591582 kubelet[2564]: E0128 01:34:50.591473 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.752162 kubelet[2564]: E0128 01:34:50.700380 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.811066 kubelet[2564]: E0128 01:34:50.808814 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:50.919245 kubelet[2564]: E0128 
01:34:50.918766 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.044475 kubelet[2564]: E0128 01:34:51.019803 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.122184 kubelet[2564]: E0128 01:34:51.121482 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.232514 kubelet[2564]: E0128 01:34:51.232077 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.371787 kubelet[2564]: E0128 01:34:51.351064 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.496522 kubelet[2564]: E0128 01:34:51.492111 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.606516 kubelet[2564]: E0128 01:34:51.606453 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.707378 kubelet[2564]: E0128 01:34:51.707211 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.811115 kubelet[2564]: E0128 01:34:51.810232 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:51.913377 kubelet[2564]: E0128 01:34:51.911584 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.069443 kubelet[2564]: E0128 01:34:52.059552 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.168064 kubelet[2564]: E0128 01:34:52.167867 2564 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 28 01:34:52.272598 kubelet[2564]: E0128 01:34:52.272533 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.377974 kubelet[2564]: E0128 01:34:52.375449 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.502839 kubelet[2564]: E0128 01:34:52.497909 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.622659 kubelet[2564]: E0128 01:34:52.604502 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.648913 kubelet[2564]: E0128 01:34:52.636628 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:34:52.705166 kubelet[2564]: E0128 01:34:52.705118 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:52.975922 kubelet[2564]: E0128 01:34:52.933770 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.053901 kubelet[2564]: E0128 01:34:53.053230 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.167169 kubelet[2564]: E0128 01:34:53.165980 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.273181 kubelet[2564]: E0128 01:34:53.269218 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.401831 kubelet[2564]: E0128 01:34:53.396076 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.501442 
kubelet[2564]: E0128 01:34:53.501197 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.620910 kubelet[2564]: E0128 01:34:53.611451 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.731414 kubelet[2564]: E0128 01:34:53.729152 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:53.914542 kubelet[2564]: E0128 01:34:53.862838 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.016865 kubelet[2564]: E0128 01:34:54.015415 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.116129 kubelet[2564]: E0128 01:34:54.115980 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.233211 kubelet[2564]: E0128 01:34:54.225731 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.418136 kubelet[2564]: E0128 01:34:54.408410 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.514556 kubelet[2564]: E0128 01:34:54.512467 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.660367 kubelet[2564]: E0128 01:34:54.648943 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.871659 kubelet[2564]: E0128 01:34:54.855204 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:54.975442 kubelet[2564]: E0128 01:34:54.974799 2564 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.164826 kubelet[2564]: E0128 01:34:55.160790 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.268577 kubelet[2564]: E0128 01:34:55.268488 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.396084 kubelet[2564]: E0128 01:34:55.390775 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.545693 kubelet[2564]: E0128 01:34:55.500382 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.601989 kubelet[2564]: E0128 01:34:55.601923 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.704909 kubelet[2564]: E0128 01:34:55.704842 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:55.919753 kubelet[2564]: E0128 01:34:55.847842 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.116164 kubelet[2564]: E0128 01:34:56.110186 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.263736 kubelet[2564]: E0128 01:34:56.225217 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.328243 kubelet[2564]: E0128 01:34:56.328180 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.428684 kubelet[2564]: E0128 01:34:56.428616 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.549759 kubelet[2564]: E0128 
01:34:56.531954 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.663931 kubelet[2564]: E0128 01:34:56.659901 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.762139 kubelet[2564]: E0128 01:34:56.761577 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.870745 kubelet[2564]: E0128 01:34:56.864478 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:56.966916 kubelet[2564]: E0128 01:34:56.966847 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.069749 kubelet[2564]: E0128 01:34:57.069651 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.134517 kubelet[2564]: E0128 01:34:57.131094 2564 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:34:57.228176 kubelet[2564]: E0128 01:34:57.228114 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.331957 kubelet[2564]: E0128 01:34:57.331883 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.435217 kubelet[2564]: E0128 01:34:57.433529 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.536483 kubelet[2564]: E0128 01:34:57.536402 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.639862 kubelet[2564]: E0128 01:34:57.637213 2564 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.742974 kubelet[2564]: E0128 01:34:57.741615 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.846616 kubelet[2564]: E0128 01:34:57.842866 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:57.946452 kubelet[2564]: E0128 01:34:57.946217 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.057510 kubelet[2564]: E0128 01:34:58.057071 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.158183 kubelet[2564]: E0128 01:34:58.158114 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.258866 kubelet[2564]: E0128 01:34:58.258489 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.365598 kubelet[2564]: E0128 01:34:58.361140 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.472461 kubelet[2564]: E0128 01:34:58.468235 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:58.591379 kubelet[2564]: E0128 01:34:58.580985 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:59.037693 kubelet[2564]: E0128 01:34:58.810762 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:59.037693 kubelet[2564]: E0128 01:34:59.029166 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:59.233777 
kubelet[2564]: E0128 01:34:59.230908 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:34:59.269477 kubelet[2564]: I0128 01:34:59.269433 2564 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:34:59.378739 kubelet[2564]: I0128 01:34:59.334600 2564 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:34:59.601377 kubelet[2564]: E0128 01:34:59.600652 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:59.605944 kubelet[2564]: I0128 01:34:59.602170 2564 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:34:59.724818 kubelet[2564]: I0128 01:34:59.723910 2564 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:34:59.809580 kubelet[2564]: E0128 01:34:59.808839 2564 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:35:00.234566 kubelet[2564]: I0128 01:35:00.234511 2564 apiserver.go:52] "Watching apiserver" Jan 28 01:35:00.265468 kubelet[2564]: E0128 01:35:00.265125 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:00.269691 kubelet[2564]: E0128 01:35:00.268504 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:00.269691 kubelet[2564]: I0128 01:35:00.268648 2564 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" 
Jan 28 01:35:00.295743 kubelet[2564]: E0128 01:35:00.292874 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:00.997525 systemd[1]: Reload requested from client PID 2847 ('systemctl') (unit session-8.scope)... Jan 28 01:35:00.997606 systemd[1]: Reloading... Jan 28 01:35:02.205152 zram_generator::config[2902]: No configuration found. Jan 28 01:35:02.420699 kubelet[2564]: I0128 01:35:02.420458 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.42037752 podStartE2EDuration="3.42037752s" podCreationTimestamp="2026-01-28 01:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:35:02.418593991 +0000 UTC m=+54.993872741" watchObservedRunningTime="2026-01-28 01:35:02.42037752 +0000 UTC m=+54.995656260" Jan 28 01:35:02.719392 kubelet[2564]: I0128 01:35:02.718911 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.718889508 podStartE2EDuration="3.718889508s" podCreationTimestamp="2026-01-28 01:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:35:02.57012253 +0000 UTC m=+55.145401279" watchObservedRunningTime="2026-01-28 01:35:02.718889508 +0000 UTC m=+55.294168238" Jan 28 01:35:02.719392 kubelet[2564]: I0128 01:35:02.719159 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.7191429190000003 podStartE2EDuration="3.719142919s" podCreationTimestamp="2026-01-28 01:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-28 01:35:02.718779502 +0000 UTC m=+55.294058242" watchObservedRunningTime="2026-01-28 01:35:02.719142919 +0000 UTC m=+55.294421680" Jan 28 01:35:03.862228 kubelet[2564]: E0128 01:35:03.851199 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:04.529435 kubelet[2564]: E0128 01:35:04.517154 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:04.607866 systemd[1]: Reloading finished in 3595 ms. Jan 28 01:35:04.825950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:35:04.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:04.909603 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:35:04.910021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:35:04.910224 systemd[1]: kubelet.service: Consumed 8.558s CPU time, 137.7M memory peak. Jan 28 01:35:04.932838 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 28 01:35:04.932987 kernel: audit: type=1131 audit(1769564104.908:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:04.955762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:35:05.089000 audit: BPF prog-id=113 op=LOAD Jan 28 01:35:05.099000 audit: BPF prog-id=69 op=UNLOAD Jan 28 01:35:05.139791 kernel: audit: type=1334 audit(1769564105.089:413): prog-id=113 op=LOAD Jan 28 01:35:05.139933 kernel: audit: type=1334 audit(1769564105.099:414): prog-id=69 op=UNLOAD Jan 28 01:35:05.139972 kernel: audit: type=1334 audit(1769564105.110:415): prog-id=114 op=LOAD Jan 28 01:35:05.110000 audit: BPF prog-id=114 op=LOAD Jan 28 01:35:05.110000 audit: BPF prog-id=70 op=UNLOAD Jan 28 01:35:05.162483 kernel: audit: type=1334 audit(1769564105.110:416): prog-id=70 op=UNLOAD Jan 28 01:35:05.114000 audit: BPF prog-id=115 op=LOAD Jan 28 01:35:05.185020 kernel: audit: type=1334 audit(1769564105.114:417): prog-id=115 op=LOAD Jan 28 01:35:05.225436 kernel: audit: type=1334 audit(1769564105.114:418): prog-id=79 op=UNLOAD Jan 28 01:35:05.114000 audit: BPF prog-id=79 op=UNLOAD Jan 28 01:35:05.233396 kernel: audit: type=1334 audit(1769564105.114:419): prog-id=116 op=LOAD Jan 28 01:35:05.114000 audit: BPF prog-id=116 op=LOAD Jan 28 01:35:05.120000 audit: BPF prog-id=117 op=LOAD Jan 28 01:35:05.120000 audit: BPF prog-id=80 op=UNLOAD Jan 28 01:35:05.319568 kernel: audit: type=1334 audit(1769564105.120:420): prog-id=117 op=LOAD Jan 28 01:35:05.319742 kernel: audit: type=1334 audit(1769564105.120:421): prog-id=80 op=UNLOAD Jan 28 01:35:05.120000 audit: BPF prog-id=81 op=UNLOAD Jan 28 01:35:05.128000 audit: BPF prog-id=118 op=LOAD Jan 28 01:35:05.128000 audit: BPF prog-id=63 op=UNLOAD Jan 28 01:35:05.128000 audit: BPF prog-id=119 op=LOAD Jan 28 01:35:05.128000 audit: BPF prog-id=120 op=LOAD Jan 28 01:35:05.128000 audit: BPF prog-id=64 op=UNLOAD Jan 28 01:35:05.139000 audit: BPF prog-id=65 op=UNLOAD Jan 28 01:35:05.141000 audit: BPF prog-id=121 op=LOAD Jan 28 01:35:05.141000 audit: BPF prog-id=66 op=UNLOAD Jan 28 01:35:05.150000 audit: BPF prog-id=122 op=LOAD Jan 28 01:35:05.150000 audit: BPF prog-id=123 op=LOAD Jan 28 01:35:05.150000 audit: BPF prog-id=67 
op=UNLOAD Jan 28 01:35:05.150000 audit: BPF prog-id=68 op=UNLOAD Jan 28 01:35:05.176000 audit: BPF prog-id=124 op=LOAD Jan 28 01:35:05.176000 audit: BPF prog-id=71 op=UNLOAD Jan 28 01:35:05.176000 audit: BPF prog-id=125 op=LOAD Jan 28 01:35:05.181000 audit: BPF prog-id=126 op=LOAD Jan 28 01:35:05.181000 audit: BPF prog-id=72 op=UNLOAD Jan 28 01:35:05.181000 audit: BPF prog-id=73 op=UNLOAD Jan 28 01:35:05.185000 audit: BPF prog-id=127 op=LOAD Jan 28 01:35:05.191000 audit: BPF prog-id=128 op=LOAD Jan 28 01:35:05.191000 audit: BPF prog-id=77 op=UNLOAD Jan 28 01:35:05.191000 audit: BPF prog-id=78 op=UNLOAD Jan 28 01:35:05.215000 audit: BPF prog-id=129 op=LOAD Jan 28 01:35:05.215000 audit: BPF prog-id=74 op=UNLOAD Jan 28 01:35:05.215000 audit: BPF prog-id=130 op=LOAD Jan 28 01:35:05.215000 audit: BPF prog-id=131 op=LOAD Jan 28 01:35:05.215000 audit: BPF prog-id=75 op=UNLOAD Jan 28 01:35:05.215000 audit: BPF prog-id=76 op=UNLOAD Jan 28 01:35:05.222000 audit: BPF prog-id=132 op=LOAD Jan 28 01:35:05.222000 audit: BPF prog-id=82 op=UNLOAD Jan 28 01:35:09.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:09.014763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:35:09.128952 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:35:09.955032 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:35:09.955032 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 28 01:35:09.955032 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:35:09.955032 kubelet[2938]: I0128 01:35:09.955694 2938 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:35:10.459835 kubelet[2938]: I0128 01:35:10.458749 2938 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:35:10.459835 kubelet[2938]: I0128 01:35:10.459168 2938 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:35:10.459835 kubelet[2938]: I0128 01:35:10.461878 2938 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:35:10.459835 kubelet[2938]: I0128 01:35:10.485251 2938 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:35:10.518422 kubelet[2938]: I0128 01:35:10.517531 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:35:10.693067 kubelet[2938]: I0128 01:35:10.692232 2938 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:35:10.825394 kubelet[2938]: I0128 01:35:10.817938 2938 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:35:10.849834 kubelet[2938]: I0128 01:35:10.849514 2938 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:35:10.850354 kubelet[2938]: I0128 01:35:10.849653 2938 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:35:10.850596 kubelet[2938]: I0128 01:35:10.850402 2938 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 28 01:35:10.850596 kubelet[2938]: I0128 01:35:10.850423 2938 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:35:10.850596 kubelet[2938]: I0128 01:35:10.850520 2938 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:35:10.850894 kubelet[2938]: I0128 01:35:10.850791 2938 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:35:10.850973 kubelet[2938]: I0128 01:35:10.850908 2938 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:35:10.857030 kubelet[2938]: I0128 01:35:10.851070 2938 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:35:10.857030 kubelet[2938]: I0128 01:35:10.851212 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:35:10.867845 kubelet[2938]: I0128 01:35:10.864507 2938 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:35:10.867845 kubelet[2938]: I0128 01:35:10.867480 2938 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:35:10.868681 kubelet[2938]: I0128 01:35:10.868378 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:35:10.868681 kubelet[2938]: I0128 01:35:10.868469 2938 server.go:1287] "Started kubelet" Jan 28 01:35:10.873380 kubelet[2938]: I0128 01:35:10.872393 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:35:10.873380 kubelet[2938]: I0128 01:35:10.872825 2938 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:35:10.873380 kubelet[2938]: I0128 01:35:10.872923 2938 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:35:10.896438 kubelet[2938]: I0128 01:35:10.890656 2938 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:35:11.149065 kubelet[2938]: I0128 01:35:11.146511 2938 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:35:11.196909 kubelet[2938]: I0128 01:35:11.165716 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:35:11.196909 kubelet[2938]: I0128 01:35:11.168978 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:35:11.196909 kubelet[2938]: I0128 01:35:11.169197 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:35:11.196909 kubelet[2938]: I0128 01:35:11.169518 2938 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:35:11.196909 kubelet[2938]: E0128 01:35:11.174745 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:35:11.196909 kubelet[2938]: E0128 01:35:11.192225 2938 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:35:11.221848 kubelet[2938]: I0128 01:35:11.209412 2938 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:35:11.221848 kubelet[2938]: I0128 01:35:11.209626 2938 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:35:11.234001 kubelet[2938]: I0128 01:35:11.233684 2938 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:35:11.330551 kubelet[2938]: I0128 01:35:11.328708 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:35:11.373032 kubelet[2938]: I0128 01:35:11.367223 2938 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 01:35:11.373032 kubelet[2938]: I0128 01:35:11.367438 2938 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:35:11.373032 kubelet[2938]: I0128 01:35:11.367467 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:35:11.373032 kubelet[2938]: I0128 01:35:11.367477 2938 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:35:11.373032 kubelet[2938]: E0128 01:35:11.367619 2938 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:35:11.469896 kubelet[2938]: E0128 01:35:11.469609 2938 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:35:11.675823 kubelet[2938]: E0128 01:35:11.675522 2938 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679074 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679175 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679442 2938 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679677 2938 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679692 2938 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679717 2938 policy_none.go:49] "None policy: Start" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679733 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.679751 2938 state_mem.go:35] "Initializing new in-memory state 
store" Jan 28 01:35:11.681655 kubelet[2938]: I0128 01:35:11.680035 2938 state_mem.go:75] "Updated machine memory state" Jan 28 01:35:11.721595 kubelet[2938]: I0128 01:35:11.720453 2938 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:35:11.721595 kubelet[2938]: I0128 01:35:11.720794 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:35:11.721595 kubelet[2938]: I0128 01:35:11.720816 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:35:11.725559 kubelet[2938]: I0128 01:35:11.721948 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:35:11.734547 kubelet[2938]: E0128 01:35:11.733462 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:35:11.734547 kubelet[2938]: I0128 01:35:11.733826 2938 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:35:11.736037 containerd[1612]: time="2026-01-28T01:35:11.735882965Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:35:11.736618 kubelet[2938]: I0128 01:35:11.736384 2938 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:35:11.886668 kubelet[2938]: I0128 01:35:11.885497 2938 apiserver.go:52] "Watching apiserver" Jan 28 01:35:12.090486 kubelet[2938]: I0128 01:35:12.088941 2938 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:35:12.168382 systemd[1]: Created slice kubepods-besteffort-pod9305c271_45c2_41ee_99c9_5078fd0e50fa.slice - libcontainer container kubepods-besteffort-pod9305c271_45c2_41ee_99c9_5078fd0e50fa.slice. 
Jan 28 01:35:12.171466 kubelet[2938]: I0128 01:35:12.171239 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:35:12.250472 kubelet[2938]: I0128 01:35:12.241492 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:35:12.250472 kubelet[2938]: I0128 01:35:12.241599 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:35:12.250472 kubelet[2938]: I0128 01:35:12.241631 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9305c271-45c2-41ee-99c9-5078fd0e50fa-kube-proxy\") pod \"kube-proxy-wvjmq\" (UID: \"9305c271-45c2-41ee-99c9-5078fd0e50fa\") " pod="kube-system/kube-proxy-wvjmq" Jan 28 01:35:12.250472 kubelet[2938]: I0128 01:35:12.241656 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlwjz\" (UniqueName: \"kubernetes.io/projected/9305c271-45c2-41ee-99c9-5078fd0e50fa-kube-api-access-wlwjz\") pod \"kube-proxy-wvjmq\" (UID: \"9305c271-45c2-41ee-99c9-5078fd0e50fa\") " pod="kube-system/kube-proxy-wvjmq" Jan 28 01:35:12.250472 kubelet[2938]: I0128 01:35:12.241683 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" 
(UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:35:12.251596 kubelet[2938]: I0128 01:35:12.241707 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:35:12.251596 kubelet[2938]: I0128 01:35:12.241727 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:35:12.251596 kubelet[2938]: I0128 01:35:12.241746 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:35:12.251596 kubelet[2938]: I0128 01:35:12.241777 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6edb375440e4b291c7482c8ceb5f2e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6edb375440e4b291c7482c8ceb5f2e7\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:35:12.251596 kubelet[2938]: I0128 01:35:12.241799 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9305c271-45c2-41ee-99c9-5078fd0e50fa-xtables-lock\") pod 
\"kube-proxy-wvjmq\" (UID: \"9305c271-45c2-41ee-99c9-5078fd0e50fa\") " pod="kube-system/kube-proxy-wvjmq" Jan 28 01:35:12.251775 kubelet[2938]: I0128 01:35:12.241833 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9305c271-45c2-41ee-99c9-5078fd0e50fa-lib-modules\") pod \"kube-proxy-wvjmq\" (UID: \"9305c271-45c2-41ee-99c9-5078fd0e50fa\") " pod="kube-system/kube-proxy-wvjmq" Jan 28 01:35:12.251775 kubelet[2938]: I0128 01:35:12.241853 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:35:12.251775 kubelet[2938]: I0128 01:35:12.241913 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:35:12.273963 kubelet[2938]: I0128 01:35:12.270884 2938 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:35:12.284023 kubelet[2938]: I0128 01:35:12.280372 2938 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:35:12.386563 kubelet[2938]: E0128 01:35:12.381851 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.387471 kubelet[2938]: E0128 01:35:12.387241 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.401735 kubelet[2938]: E0128 01:35:12.394756 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.510171 kubelet[2938]: E0128 01:35:12.507066 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.527038 kubelet[2938]: E0128 01:35:12.509807 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.547979 kubelet[2938]: E0128 01:35:12.534863 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:12.550388 containerd[1612]: time="2026-01-28T01:35:12.536000412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvjmq,Uid:9305c271-45c2-41ee-99c9-5078fd0e50fa,Namespace:kube-system,Attempt:0,}" Jan 28 01:35:12.858721 containerd[1612]: time="2026-01-28T01:35:12.822683455Z" level=info msg="connecting to shim 5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f" address="unix:///run/containerd/s/710ca46c1553fe1a3cd3410948b8d3e7e4bbefffafe4b6c36424c4f0311a3364" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:35:13.354669 systemd[1]: Started cri-containerd-5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f.scope - libcontainer container 5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f. 
Jan 28 01:35:13.935232 kubelet[2938]: E0128 01:35:13.931634 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:13.935232 kubelet[2938]: E0128 01:35:13.932559 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:14.781000 audit: BPF prog-id=133 op=LOAD Jan 28 01:35:14.796772 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:35:14.796970 kernel: audit: type=1334 audit(1769564114.781:454): prog-id=133 op=LOAD Jan 28 01:35:15.028000 audit: BPF prog-id=134 op=LOAD Jan 28 01:35:15.063083 kernel: audit: type=1334 audit(1769564115.028:455): prog-id=134 op=LOAD Jan 28 01:35:15.114768 kernel: audit: type=1300 audit(1769564115.028:455): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.028000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.116771 kubelet[2938]: E0128 01:35:15.046206 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:15.028000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: BPF prog-id=134 op=UNLOAD Jan 28 01:35:15.308640 kernel: audit: type=1327 audit(1769564115.028:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.308755 kernel: audit: type=1334 audit(1769564115.029:456): prog-id=134 op=UNLOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.386043 kernel: audit: type=1300 audit(1769564115.029:456): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.446843 kernel: audit: type=1327 audit(1769564115.029:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: BPF prog-id=135 op=LOAD Jan 28 01:35:15.460678 kernel: audit: type=1334 audit(1769564115.029:457): prog-id=135 op=LOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.509419 kernel: audit: type=1300 audit(1769564115.029:457): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.509544 kernel: audit: type=1327 audit(1769564115.029:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.509666 kubelet[2938]: E0128 01:35:15.498689 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 
01:35:15.029000 audit: BPF prog-id=136 op=LOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: BPF prog-id=136 op=UNLOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: BPF prog-id=135 op=UNLOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.029000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:15.029000 audit: BPF prog-id=137 op=LOAD Jan 28 01:35:15.029000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=2990 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:15.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363963633965326466643136303738613538343862636663663162 Jan 28 01:35:16.126866 containerd[1612]: time="2026-01-28T01:35:16.113616601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvjmq,Uid:9305c271-45c2-41ee-99c9-5078fd0e50fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f\"" Jan 28 01:35:16.361521 kubelet[2938]: E0128 01:35:16.358930 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:16.569603 containerd[1612]: time="2026-01-28T01:35:16.569546709Z" level=info msg="CreateContainer within sandbox \"5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:35:16.653972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349869709.mount: Deactivated successfully. 
Jan 28 01:35:16.660645 containerd[1612]: time="2026-01-28T01:35:16.660442664Z" level=info msg="Container 888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:35:16.715216 containerd[1612]: time="2026-01-28T01:35:16.715045421Z" level=info msg="CreateContainer within sandbox \"5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814\"" Jan 28 01:35:16.728860 containerd[1612]: time="2026-01-28T01:35:16.727933180Z" level=info msg="StartContainer for \"888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814\"" Jan 28 01:35:16.781963 containerd[1612]: time="2026-01-28T01:35:16.781734569Z" level=info msg="connecting to shim 888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814" address="unix:///run/containerd/s/710ca46c1553fe1a3cd3410948b8d3e7e4bbefffafe4b6c36424c4f0311a3364" protocol=ttrpc version=3 Jan 28 01:35:16.957711 systemd[1]: Started cri-containerd-888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814.scope - libcontainer container 888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814. 
Jan 28 01:35:17.748000 audit: BPF prog-id=138 op=LOAD Jan 28 01:35:17.748000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2990 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:17.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383233366139346463613263386339323661613039653338656162 Jan 28 01:35:17.748000 audit: BPF prog-id=139 op=LOAD Jan 28 01:35:17.748000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2990 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:17.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383233366139346463613263386339323661613039653338656162 Jan 28 01:35:17.748000 audit: BPF prog-id=139 op=UNLOAD Jan 28 01:35:17.748000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:17.748000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383233366139346463613263386339323661613039653338656162 Jan 28 01:35:17.749000 audit: BPF prog-id=138 op=UNLOAD Jan 28 01:35:17.749000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:17.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383233366139346463613263386339323661613039653338656162 Jan 28 01:35:17.749000 audit: BPF prog-id=140 op=LOAD Jan 28 01:35:17.749000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2990 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:17.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383233366139346463613263386339323661613039653338656162 Jan 28 01:35:17.809496 kubelet[2938]: E0128 01:35:17.804929 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:18.035999 containerd[1612]: time="2026-01-28T01:35:18.035519786Z" level=info msg="StartContainer for 
\"888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814\" returns successfully" Jan 28 01:35:18.700391 kubelet[2938]: E0128 01:35:18.683238 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:18.716811 kubelet[2938]: E0128 01:35:18.683420 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:19.694698 kubelet[2938]: E0128 01:35:19.694656 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:20.727451 kubelet[2938]: I0128 01:35:20.717878 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wvjmq" podStartSLOduration=10.717850296 podStartE2EDuration="10.717850296s" podCreationTimestamp="2026-01-28 01:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:35:18.871940846 +0000 UTC m=+9.633413751" watchObservedRunningTime="2026-01-28 01:35:20.717850296 +0000 UTC m=+11.479323191" Jan 28 01:35:20.979640 kubelet[2938]: I0128 01:35:20.978360 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60ecebbb-29f6-459e-8a9c-8a29f34fe394-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7wqcs\" (UID: \"60ecebbb-29f6-459e-8a9c-8a29f34fe394\") " pod="tigera-operator/tigera-operator-7dcd859c48-7wqcs" Jan 28 01:35:20.979640 kubelet[2938]: I0128 01:35:20.978926 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7g67\" (UniqueName: 
\"kubernetes.io/projected/60ecebbb-29f6-459e-8a9c-8a29f34fe394-kube-api-access-c7g67\") pod \"tigera-operator-7dcd859c48-7wqcs\" (UID: \"60ecebbb-29f6-459e-8a9c-8a29f34fe394\") " pod="tigera-operator/tigera-operator-7dcd859c48-7wqcs" Jan 28 01:35:21.046034 systemd[1]: Created slice kubepods-besteffort-pod60ecebbb_29f6_459e_8a9c_8a29f34fe394.slice - libcontainer container kubepods-besteffort-pod60ecebbb_29f6_459e_8a9c_8a29f34fe394.slice. Jan 28 01:35:21.391577 containerd[1612]: time="2026-01-28T01:35:21.388767413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7wqcs,Uid:60ecebbb-29f6-459e-8a9c-8a29f34fe394,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:35:21.865603 containerd[1612]: time="2026-01-28T01:35:21.865545620Z" level=info msg="connecting to shim 9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba" address="unix:///run/containerd/s/ce2346db093c1e336250475a67b2ed00b51378284e1795f33c44f422c5263262" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:35:23.800900 systemd[1]: Started cri-containerd-9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba.scope - libcontainer container 9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba. 
Jan 28 01:35:24.231000 audit: BPF prog-id=141 op=LOAD Jan 28 01:35:24.270386 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 01:35:24.270540 kernel: audit: type=1334 audit(1769564124.231:467): prog-id=141 op=LOAD Jan 28 01:35:24.231000 audit: BPF prog-id=142 op=LOAD Jan 28 01:35:24.369404 kernel: audit: type=1334 audit(1769564124.231:468): prog-id=142 op=LOAD Jan 28 01:35:24.369565 kernel: audit: type=1300 audit(1769564124.231:468): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.600038 kernel: audit: type=1327 audit(1769564124.231:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.679238 kernel: audit: type=1334 audit(1769564124.231:469): prog-id=142 op=UNLOAD Jan 28 01:35:24.683766 kernel: audit: type=1300 audit(1769564124.231:469): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: BPF prog-id=142 op=UNLOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.778234 kernel: audit: type=1327 audit(1769564124.231:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.985644 kernel: audit: type=1334 audit(1769564124.231:470): prog-id=143 op=LOAD Jan 28 01:35:25.013472 kernel: audit: type=1300 audit(1769564124.231:470): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: BPF prog-id=143 op=LOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:35:25.297476 kernel: audit: type=1327 audit(1769564124.231:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.231000 audit: BPF prog-id=144 op=LOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.231000 audit: BPF prog-id=144 op=UNLOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 
Jan 28 01:35:24.231000 audit: BPF prog-id=143 op=UNLOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:24.231000 audit: BPF prog-id=145 op=LOAD Jan 28 01:35:24.231000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=3086 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:24.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936323032363562616164656462313932316564393062323939356534 Jan 28 01:35:25.115000 audit[3143]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:25.115000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaf4dc710 a2=0 a3=7ffdaf4dc6fc items=0 ppid=3042 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.115000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:35:25.176000 audit[3144]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:25.176000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe56e6d620 a2=0 a3=7ffe56e6d60c items=0 ppid=3042 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:35:25.309000 audit[3145]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:25.309000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc39863930 a2=0 a3=7ffc3986391c items=0 ppid=3042 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:35:25.464000 audit[3150]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:25.464000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc712a910 a2=0 a3=7ffcc712a8fc items=0 ppid=3042 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:35:25.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:35:25.640000 audit[3152]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:25.640000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfbdaf8a0 a2=0 a3=7ffdfbdaf88c items=0 ppid=3042 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:35:25.659331 kubelet[2938]: E0128 01:35:25.655229 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:25.660564 containerd[1612]: time="2026-01-28T01:35:25.659823082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7wqcs,Uid:60ecebbb-29f6-459e-8a9c-8a29f34fe394,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba\"" Jan 28 01:35:25.688482 containerd[1612]: time="2026-01-28T01:35:25.680536674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:35:25.723000 audit[3153]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:25.723000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde9bae1f0 a2=0 a3=7ffde9bae1dc items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:35:25.888000 audit[3154]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:25.888000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcec311460 a2=0 a3=7ffcec31144c items=0 ppid=3042 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:35:25.966000 audit[3157]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:25.966000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1fbf4bd0 a2=0 a3=7ffd1fbf4bbc items=0 ppid=3042 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:25.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 01:35:26.106000 audit[3160]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.106000 audit[3160]: SYSCALL arch=c000003e syscall=46 
success=yes exit=752 a0=3 a1=7ffc890df5f0 a2=0 a3=7ffc890df5dc items=0 ppid=3042 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 01:35:26.158000 audit[3161]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.158000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc99471b30 a2=0 a3=7ffc99471b1c items=0 ppid=3042 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:35:26.206000 audit[3163]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.206000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcaf0c5fb0 a2=0 a3=7ffcaf0c5f9c items=0 ppid=3042 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.206000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:35:26.592000 audit[3164]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.592000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13b44c70 a2=0 a3=7fff13b44c5c items=0 ppid=3042 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:35:26.686000 audit[3166]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.686000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeae6417f0 a2=0 a3=7ffeae6417dc items=0 ppid=3042 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:35:26.819000 audit[3169]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.819000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7fff8e1436b0 a2=0 a3=7fff8e14369c items=0 ppid=3042 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 01:35:26.829000 audit[3170]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.829000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe40266820 a2=0 a3=7ffe4026680c items=0 ppid=3042 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:35:26.860000 audit[3172]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.860000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdacdff920 a2=0 a3=7ffdacdff90c items=0 ppid=3042 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.860000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:35:26.877000 audit[3173]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.877000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed825b7b0 a2=0 a3=7ffed825b79c items=0 ppid=3042 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:35:26.929000 audit[3175]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:26.929000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe520ab890 a2=0 a3=7ffe520ab87c items=0 ppid=3042 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:26.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:35:27.202000 audit[3178]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.202000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7fff9c226490 a2=0 a3=7fff9c22647c items=0 ppid=3042 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:35:27.511000 audit[3181]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.511000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc543f3770 a2=0 a3=7ffc543f375c items=0 ppid=3042 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:35:27.517000 audit[3182]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.517000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb182f5a0 a2=0 a3=7ffeb182f58c items=0 ppid=3042 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.517000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:35:27.559000 audit[3184]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.559000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc40c2f840 a2=0 a3=7ffc40c2f82c items=0 ppid=3042 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:35:27.600000 audit[3187]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.600000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe36b9a140 a2=0 a3=7ffe36b9a12c items=0 ppid=3042 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:35:27.630000 audit[3188]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.630000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd13cf6700 a2=0 a3=7ffd13cf66ec items=0 ppid=3042 pid=3188 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:35:27.662000 audit[3190]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:35:27.662000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc68070310 a2=0 a3=7ffc680702fc items=0 ppid=3042 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:35:28.259000 audit[3196]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:35:28.259000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdee9b5e60 a2=0 a3=7ffdee9b5e4c items=0 ppid=3042 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:28.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:35:30.123521 kernel: kauditd_printk_skb: 90 callbacks suppressed Jan 28 01:35:30.163550 kernel: audit: type=1325 audit(1769564129.237:501): table=nat:80 
family=2 entries=14 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:35:30.164150 kernel: audit: type=1300 audit(1769564129.237:501): arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdee9b5e60 a2=0 a3=7ffdee9b5e4c items=0 ppid=3042 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.164517 kernel: audit: type=1327 audit(1769564129.237:501): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:35:29.237000 audit[3196]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:35:29.237000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdee9b5e60 a2=0 a3=7ffdee9b5e4c items=0 ppid=3042 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:29.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:35:30.334000 audit[3203]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.379008 kernel: audit: type=1325 audit(1769564130.334:502): table=filter:81 family=10 entries=1 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.334000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe911220e0 a2=0 a3=7ffe911220cc items=0 ppid=3042 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.428022 kernel: audit: type=1300 audit(1769564130.334:502): arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe911220e0 a2=0 a3=7ffe911220cc items=0 ppid=3042 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:35:30.490123 kernel: audit: type=1327 audit(1769564130.334:502): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:35:30.490394 kernel: audit: type=1325 audit(1769564130.434:503): table=filter:82 family=10 entries=2 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.434000 audit[3205]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.560896 kernel: audit: type=1300 audit(1769564130.434:503): arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc84c5a9c0 a2=0 a3=7ffc84c5a9ac items=0 ppid=3042 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.434000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc84c5a9c0 a2=0 a3=7ffc84c5a9ac items=0 ppid=3042 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.610115 kernel: audit: 
type=1327 audit(1769564130.434:503): proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 01:35:30.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 01:35:30.663002 kernel: audit: type=1325 audit(1769564130.475:504): table=filter:83 family=10 entries=1 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.475000 audit[3208]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.475000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc5a94830 a2=0 a3=7fffc5a9481c items=0 ppid=3042 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 01:35:30.479000 audit[3209]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.479000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8c299120 a2=0 a3=7ffc8c29910c items=0 ppid=3042 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:35:30.521000 audit[3211]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.521000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff625fbe20 a2=0 a3=7fff625fbe0c items=0 ppid=3042 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:35:30.563000 audit[3212]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.563000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3c7b95b0 a2=0 a3=7ffd3c7b959c items=0 ppid=3042 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:35:30.585000 audit[3214]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.585000 
audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeab8e41e0 a2=0 a3=7ffeab8e41cc items=0 ppid=3042 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 01:35:30.605000 audit[3217]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.605000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe620e78b0 a2=0 a3=7ffe620e789c items=0 ppid=3042 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:35:30.612000 audit[3218]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.612000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5170ed0 a2=0 a3=7ffdc5170ebc items=0 ppid=3042 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:35:30.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:35:30.704000 audit[3220]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.704000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb3ae2310 a2=0 a3=7ffcb3ae22fc items=0 ppid=3042 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.704000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:35:30.735000 audit[3221]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.735000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8b948870 a2=0 a3=7fff8b94885c items=0 ppid=3042 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:35:30.764000 audit[3223]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.764000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6dfbb810 a2=0 a3=7ffd6dfbb7fc items=0 ppid=3042 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:35:30.795000 audit[3226]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.795000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd14139fe0 a2=0 a3=7ffd14139fcc items=0 ppid=3042 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:35:30.869000 audit[3229]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.869000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff98e3ddb0 a2=0 a3=7fff98e3dd9c items=0 ppid=3042 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.869000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 01:35:30.889000 audit[3230]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.889000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde4bfdaf0 a2=0 a3=7ffde4bfdadc items=0 ppid=3042 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:35:30.908000 audit[3232]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.908000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe496f5490 a2=0 a3=7ffe496f547c items=0 ppid=3042 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:35:30.977000 audit[3235]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:30.977000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde178d480 a2=0 
a3=7ffde178d46c items=0 ppid=3042 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:30.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:35:31.005000 audit[3236]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:31.005000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb9e1a430 a2=0 a3=7ffcb9e1a41c items=0 ppid=3042 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:35:31.067000 audit[3238]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:31.067000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffebaee6520 a2=0 a3=7ffebaee650c items=0 ppid=3042 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 
01:35:31.104000 audit[3240]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:31.104000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb9c353e0 a2=0 a3=7ffeb9c353cc items=0 ppid=3042 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:35:31.135000 audit[3245]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:31.135000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd23800a50 a2=0 a3=7ffd23800a3c items=0 ppid=3042 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:35:31.186000 audit[3248]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:35:31.186000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd00ff7500 a2=0 a3=7ffd00ff74ec items=0 ppid=3042 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.186000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:35:31.357000 audit[3250]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:35:31.357000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc73551190 a2=0 a3=7ffc7355117c items=0 ppid=3042 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.357000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:35:31.358000 audit[3250]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:35:31.358000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc73551190 a2=0 a3=7ffc7355117c items=0 ppid=3042 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:31.358000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:35:31.527888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045141388.mount: Deactivated successfully. 
Jan 28 01:35:47.619873 containerd[1612]: time="2026-01-28T01:35:47.618791438Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:35:47.643659 containerd[1612]: time="2026-01-28T01:35:47.641786477Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23559564" Jan 28 01:35:47.647147 containerd[1612]: time="2026-01-28T01:35:47.645803803Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:35:47.655938 containerd[1612]: time="2026-01-28T01:35:47.654191144Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:35:47.655938 containerd[1612]: time="2026-01-28T01:35:47.655452398Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 21.974868035s" Jan 28 01:35:47.655938 containerd[1612]: time="2026-01-28T01:35:47.655486522Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:35:47.668791 containerd[1612]: time="2026-01-28T01:35:47.668700101Z" level=info msg="CreateContainer within sandbox \"9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:35:47.722055 containerd[1612]: time="2026-01-28T01:35:47.720658737Z" level=info msg="Container 
e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:35:47.806350 containerd[1612]: time="2026-01-28T01:35:47.806121338Z" level=info msg="CreateContainer within sandbox \"9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c\"" Jan 28 01:35:47.818822 containerd[1612]: time="2026-01-28T01:35:47.818372689Z" level=info msg="StartContainer for \"e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c\"" Jan 28 01:35:47.830134 containerd[1612]: time="2026-01-28T01:35:47.829910750Z" level=info msg="connecting to shim e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c" address="unix:///run/containerd/s/ce2346db093c1e336250475a67b2ed00b51378284e1795f33c44f422c5263262" protocol=ttrpc version=3 Jan 28 01:35:47.991612 systemd[1]: Started cri-containerd-e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c.scope - libcontainer container e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c. 
Jan 28 01:35:48.097000 audit: BPF prog-id=146 op=LOAD Jan 28 01:35:48.113454 kernel: kauditd_printk_skb: 65 callbacks suppressed Jan 28 01:35:48.123863 kernel: audit: type=1334 audit(1769564148.097:526): prog-id=146 op=LOAD Jan 28 01:35:48.123928 kernel: audit: type=1334 audit(1769564148.099:527): prog-id=147 op=LOAD Jan 28 01:35:48.124059 kernel: audit: type=1300 audit(1769564148.099:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.099000 audit: BPF prog-id=147 op=LOAD Jan 28 01:35:48.099000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.146138 kernel: audit: type=1327 audit(1769564148.099:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.190659 kernel: audit: type=1334 audit(1769564148.099:528): prog-id=147 op=UNLOAD Jan 28 01:35:48.202784 kernel: audit: type=1300 audit(1769564148.099:528): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.099000 audit: BPF prog-id=147 op=UNLOAD Jan 28 01:35:48.099000 audit[3257]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.330460 kernel: audit: type=1327 audit(1769564148.099:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.336399 kernel: audit: type=1334 audit(1769564148.100:529): prog-id=148 op=LOAD Jan 28 01:35:48.100000 audit: BPF prog-id=148 op=LOAD Jan 28 01:35:48.100000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.387449 kernel: audit: type=1300 audit(1769564148.100:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:35:48.387593 kernel: audit: type=1327 audit(1769564148.100:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.100000 audit: BPF prog-id=149 op=LOAD Jan 28 01:35:48.100000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.100000 audit: BPF prog-id=149 op=UNLOAD Jan 28 01:35:48.100000 audit[3257]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 
Jan 28 01:35:48.100000 audit: BPF prog-id=148 op=UNLOAD Jan 28 01:35:48.100000 audit[3257]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.100000 audit: BPF prog-id=150 op=LOAD Jan 28 01:35:48.100000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531363038313637623864356137373031396331633331353262626631 Jan 28 01:35:48.525210 containerd[1612]: time="2026-01-28T01:35:48.524563954Z" level=info msg="StartContainer for \"e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c\" returns successfully" Jan 28 01:35:48.847098 kubelet[2938]: I0128 01:35:48.838730 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7wqcs" podStartSLOduration=6.857467355 podStartE2EDuration="28.838706979s" podCreationTimestamp="2026-01-28 01:35:20 +0000 UTC" firstStartedPulling="2026-01-28 01:35:25.676761921 +0000 UTC m=+16.438234806" lastFinishedPulling="2026-01-28 
01:35:47.658001545 +0000 UTC m=+38.419474430" observedRunningTime="2026-01-28 01:35:48.833033963 +0000 UTC m=+39.594506868" watchObservedRunningTime="2026-01-28 01:35:48.838706979 +0000 UTC m=+39.600179865" Jan 28 01:36:06.374219 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 28 01:36:06.611144 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:36:06.611474 kernel: audit: type=1106 audit(1769564166.380:534): pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:36:06.380000 audit[1839]: USER_END pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:36:06.613000 audit[1839]: CRED_DISP pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:36:06.682051 kernel: audit: type=1104 audit(1769564166.613:535): pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:06.716863 sshd[1838]: Connection closed by 10.0.0.1 port 41872 Jan 28 01:36:06.745651 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:06.896683 kernel: audit: type=1106 audit(1769564166.786:536): pid=1833 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:06.896831 kernel: audit: type=1104 audit(1769564166.805:537): pid=1833 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:06.786000 audit[1833]: USER_END pid=1833 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:06.805000 audit[1833]: CRED_DISP pid=1833 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:06.833689 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:41872.service: Deactivated successfully. Jan 28 01:36:06.858520 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:36:06.858994 systemd[1]: session-8.scope: Consumed 13.835s CPU time, 217.9M memory peak. Jan 28 01:36:06.898157 systemd-logind[1594]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:36:06.901067 systemd-logind[1594]: Removed session 8. 
Jan 28 01:36:06.946656 kernel: audit: type=1131 audit(1769564166.833:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:06.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:08.723104 update_engine[1599]: I20260128 01:36:08.711220 1599 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:36:08.723104 update_engine[1599]: I20260128 01:36:08.714788 1599 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:36:08.723104 update_engine[1599]: I20260128 01:36:08.722222 1599 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:36:08.778667 update_engine[1599]: I20260128 01:36:08.756115 1599 omaha_request_params.cc:62] Current group set to beta Jan 28 01:36:08.778667 update_engine[1599]: I20260128 01:36:08.767652 1599 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:36:08.778667 update_engine[1599]: I20260128 01:36:08.768093 1599 update_attempter.cc:643] Scheduling an action processor start. 
Jan 28 01:36:08.778667 update_engine[1599]: I20260128 01:36:08.768131 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:36:08.781014 update_engine[1599]: I20260128 01:36:08.780911 1599 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:36:08.781463 update_engine[1599]: I20260128 01:36:08.781172 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:36:08.789400 update_engine[1599]: I20260128 01:36:08.781245 1599 omaha_request_action.cc:272] Request: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: Jan 28 01:36:08.789400 update_engine[1599]: I20260128 01:36:08.782563 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:36:08.796059 locksmithd[1656]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:36:08.823459 update_engine[1599]: I20260128 01:36:08.821487 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:36:08.823657 update_engine[1599]: I20260128 01:36:08.823231 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:36:08.866629 update_engine[1599]: E20260128 01:36:08.866554 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:36:08.867033 update_engine[1599]: I20260128 01:36:08.866994 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:36:08.959486 kernel: audit: type=1325 audit(1769564168.924:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:08.924000 audit[3352]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:08.924000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff9966f0d0 a2=0 a3=7fff9966f0bc items=0 ppid=3042 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:09.090740 kernel: audit: type=1300 audit(1769564168.924:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff9966f0d0 a2=0 a3=7fff9966f0bc items=0 ppid=3042 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:09.098827 kernel: audit: type=1327 audit(1769564168.924:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:09.099040 kernel: audit: type=1325 audit(1769564169.084:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:08.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
28 01:36:09.084000 audit[3352]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:09.177179 kernel: audit: type=1300 audit(1769564169.084:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9966f0d0 a2=0 a3=0 items=0 ppid=3042 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:09.084000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9966f0d0 a2=0 a3=0 items=0 ppid=3042 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:09.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:10.175000 audit[3354]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:10.175000 audit[3354]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc95ce3ab0 a2=0 a3=7ffc95ce3a9c items=0 ppid=3042 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:10.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:10.200000 audit[3354]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:10.200000 audit[3354]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffc95ce3ab0 a2=0 a3=0 items=0 ppid=3042 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:10.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:19.330625 update_engine[1599]: I20260128 01:36:19.267009 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:36:19.330625 update_engine[1599]: I20260128 01:36:19.331521 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:36:21.017179 update_engine[1599]: I20260128 01:36:21.000124 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:36:21.106665 update_engine[1599]: E20260128 01:36:21.104959 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:36:22.127221 update_engine[1599]: I20260128 01:36:21.212049 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:36:28.621054 systemd[1]: cri-containerd-bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83.scope: Deactivated successfully. Jan 28 01:36:29.015003 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 01:36:29.030724 kernel: audit: type=1334 audit(1769564188.895:543): prog-id=98 op=UNLOAD Jan 28 01:36:29.031190 kernel: audit: type=1334 audit(1769564188.895:544): prog-id=102 op=UNLOAD Jan 28 01:36:28.895000 audit: BPF prog-id=98 op=UNLOAD Jan 28 01:36:28.895000 audit: BPF prog-id=102 op=UNLOAD Jan 28 01:36:28.833737 systemd[1]: cri-containerd-bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83.scope: Consumed 9.309s CPU time, 52.1M memory peak, 64K read from disk. 
Jan 28 01:36:29.107941 kernel: audit: type=1334 audit(1769564189.081:545): prog-id=151 op=LOAD Jan 28 01:36:29.437807 kernel: audit: type=1334 audit(1769564189.081:546): prog-id=83 op=UNLOAD Jan 28 01:36:29.081000 audit: BPF prog-id=151 op=LOAD Jan 28 01:36:29.081000 audit: BPF prog-id=83 op=UNLOAD Jan 28 01:36:31.772581 update_engine[1599]: I20260128 01:36:31.712523 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:36:31.772581 update_engine[1599]: I20260128 01:36:31.715854 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:36:32.894555 update_engine[1599]: I20260128 01:36:31.740934 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:36:34.608213 update_engine[1599]: E20260128 01:36:32.963392 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:36:34.608213 update_engine[1599]: I20260128 01:36:32.965677 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:36:40.829000 audit: BPF prog-id=152 op=LOAD Jan 28 01:36:40.829000 audit: BPF prog-id=88 op=UNLOAD Jan 28 01:36:40.326434 systemd[1]: cri-containerd-7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6.scope: Deactivated successfully. Jan 28 01:36:40.885224 kernel: audit: type=1334 audit(1769564200.829:547): prog-id=152 op=LOAD Jan 28 01:36:40.885440 kernel: audit: type=1334 audit(1769564200.829:548): prog-id=88 op=UNLOAD Jan 28 01:36:40.885479 kernel: sched: DL replenish lagged too much Jan 28 01:36:40.821224 systemd[1]: cri-containerd-7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6.scope: Consumed 8.073s CPU time, 20.3M memory peak, 64K read from disk. 
Jan 28 01:36:40.911000 audit: BPF prog-id=103 op=UNLOAD Jan 28 01:36:40.963739 containerd[1612]: time="2026-01-28T01:36:40.933968826Z" level=info msg="received container exit event container_id:\"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\" id:\"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\" pid:2767 exit_status:1 exited_at:{seconds:1769564191 nanos:695983363}" Jan 28 01:36:40.963739 containerd[1612]: time="2026-01-28T01:36:40.901908550Z" level=error msg="post event" error="context deadline exceeded" Jan 28 01:36:40.976893 kernel: audit: type=1334 audit(1769564200.911:549): prog-id=103 op=UNLOAD Jan 28 01:36:40.911000 audit: BPF prog-id=107 op=UNLOAD Jan 28 01:36:41.092682 kernel: audit: type=1334 audit(1769564200.911:550): prog-id=107 op=UNLOAD Jan 28 01:36:41.102873 containerd[1612]: time="2026-01-28T01:36:41.032414584Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Jan 28 01:36:41.167611 containerd[1612]: time="2026-01-28T01:36:41.165789932Z" level=info msg="received container exit event container_id:\"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\" id:\"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\" pid:2784 exit_status:1 exited_at:{seconds:1769564201 nanos:145919250}" Jan 28 01:36:41.498393 kubelet[2938]: E0128 01:36:41.496187 2938 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Jan 28 01:36:41.565034 kubelet[2938]: E0128 01:36:41.564419 2938 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Jan 28 01:36:41.917390 kubelet[2938]: E0128 01:36:41.878939 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="24.298s" Jan 28 01:36:41.954958 kubelet[2938]: E0128 01:36:41.952003 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:41.965742 kubelet[2938]: E0128 01:36:41.965620 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:41.968836 kubelet[2938]: E0128 01:36:41.966460 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:42.791947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6-rootfs.mount: Deactivated successfully. Jan 28 01:36:42.818751 kubelet[2938]: E0128 01:36:42.817225 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:42.849183 systemd[1729]: Created slice background.slice - User Background Tasks Slice. Jan 28 01:36:42.867701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83-rootfs.mount: Deactivated successfully. Jan 28 01:36:42.917876 systemd[1729]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 28 01:36:43.126449 systemd[1729]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 28 01:36:43.699693 update_engine[1599]: I20260128 01:36:43.693385 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:36:43.699693 update_engine[1599]: I20260128 01:36:43.693698 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:36:43.700917 update_engine[1599]: I20260128 01:36:43.700840 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:36:43.717701 update_engine[1599]: E20260128 01:36:43.717625 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:36:43.718096 update_engine[1599]: I20260128 01:36:43.717993 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:36:43.718214 update_engine[1599]: I20260128 01:36:43.718190 1599 omaha_request_action.cc:617] Omaha request response: Jan 28 01:36:43.718701 update_engine[1599]: E20260128 01:36:43.718669 1599 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:36:43.719426 update_engine[1599]: I20260128 01:36:43.719394 1599 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.725713 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.725747 1599 update_attempter.cc:306] Processing Done. Jan 28 01:36:43.737905 update_engine[1599]: E20260128 01:36:43.725899 1599 update_attempter.cc:619] Update failed. 
Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.725975 1599 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.725991 1599 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.726001 1599 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.726426 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.730124 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.730149 1599 omaha_request_action.cc:272] Request: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.730160 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.730204 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:36:43.737905 update_engine[1599]: I20260128 01:36:43.731244 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:36:43.746133 locksmithd[1656]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:36:43.771870 update_engine[1599]: E20260128 01:36:43.771798 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:36:43.772181 update_engine[1599]: I20260128 01:36:43.772146 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:36:43.772425 update_engine[1599]: I20260128 01:36:43.772397 1599 omaha_request_action.cc:617] Omaha request response: Jan 28 01:36:43.774920 update_engine[1599]: I20260128 01:36:43.774889 1599 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:36:43.775010 update_engine[1599]: I20260128 01:36:43.774989 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:36:43.775074 update_engine[1599]: I20260128 01:36:43.775058 1599 update_attempter.cc:306] Processing Done. Jan 28 01:36:43.775212 update_engine[1599]: I20260128 01:36:43.775190 1599 update_attempter.cc:310] Error event sent. 
Jan 28 01:36:43.782151 update_engine[1599]: I20260128 01:36:43.775417 1599 update_check_scheduler.cc:74] Next update check in 47m21s Jan 28 01:36:43.792708 locksmithd[1656]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:36:43.915075 kubelet[2938]: I0128 01:36:43.914955 2938 scope.go:117] "RemoveContainer" containerID="bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83" Jan 28 01:36:43.920820 kubelet[2938]: E0128 01:36:43.920757 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:44.004065 kubelet[2938]: I0128 01:36:44.001878 2938 scope.go:117] "RemoveContainer" containerID="7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6" Jan 28 01:36:44.034382 kubelet[2938]: E0128 01:36:44.026232 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:44.084397 containerd[1612]: time="2026-01-28T01:36:44.076821260Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 01:36:44.130248 containerd[1612]: time="2026-01-28T01:36:44.122633409Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:36:44.407686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885028006.mount: Deactivated successfully. Jan 28 01:36:44.455679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768444302.mount: Deactivated successfully. 
Jan 28 01:36:44.520940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775031743.mount: Deactivated successfully. Jan 28 01:36:44.558373 containerd[1612]: time="2026-01-28T01:36:44.557832921Z" level=info msg="Container 0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:36:44.653716 containerd[1612]: time="2026-01-28T01:36:44.653600972Z" level=info msg="Container 714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:36:44.913632 containerd[1612]: time="2026-01-28T01:36:44.913215818Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb\"" Jan 28 01:36:44.953780 containerd[1612]: time="2026-01-28T01:36:44.953722555Z" level=info msg="StartContainer for \"0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb\"" Jan 28 01:36:44.972159 containerd[1612]: time="2026-01-28T01:36:44.972103925Z" level=info msg="connecting to shim 0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb" address="unix:///run/containerd/s/9f1abb83d31b9d22e8e0e3b7999dce2fc512d5c9d1c633bf2f7c52ea2b0b2d95" protocol=ttrpc version=3 Jan 28 01:36:44.987080 containerd[1612]: time="2026-01-28T01:36:44.986997638Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5\"" Jan 28 01:36:44.994120 containerd[1612]: time="2026-01-28T01:36:44.994012459Z" level=info msg="StartContainer for \"714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5\"" Jan 28 01:36:45.003375 containerd[1612]: time="2026-01-28T01:36:45.000958350Z" 
level=info msg="connecting to shim 714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5" address="unix:///run/containerd/s/0cf8acde16ffb195c6f80d9b444f4d007e850459187eab2f54d297c8d217061b" protocol=ttrpc version=3 Jan 28 01:36:45.167199 systemd[1]: Started cri-containerd-714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5.scope - libcontainer container 714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5. Jan 28 01:36:45.362976 systemd[1]: Started cri-containerd-0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb.scope - libcontainer container 0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb. Jan 28 01:36:45.384000 audit: BPF prog-id=153 op=LOAD Jan 28 01:36:45.402537 kernel: audit: type=1334 audit(1769564205.384:551): prog-id=153 op=LOAD Jan 28 01:36:45.406000 audit: BPF prog-id=154 op=LOAD Jan 28 01:36:45.424845 kernel: audit: type=1334 audit(1769564205.406:552): prog-id=154 op=LOAD Jan 28 01:36:45.406000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.482424 kernel: audit: type=1300 audit(1769564205.406:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.526756 kernel: audit: type=1327 
audit(1769564205.406:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.406000 audit: BPF prog-id=154 op=UNLOAD Jan 28 01:36:45.544762 kernel: audit: type=1334 audit(1769564205.406:553): prog-id=154 op=UNLOAD Jan 28 01:36:45.544906 kernel: audit: type=1300 audit(1769564205.406:553): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.406000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.423000 audit: BPF prog-id=155 op=LOAD Jan 28 01:36:45.423000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.423000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.423000 audit: BPF prog-id=156 op=LOAD Jan 28 01:36:45.423000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.424000 audit: BPF prog-id=156 op=UNLOAD Jan 28 01:36:45.424000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.424000 audit: BPF prog-id=155 op=UNLOAD Jan 28 01:36:45.424000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:36:45.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.424000 audit: BPF prog-id=157 op=LOAD Jan 28 01:36:45.424000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2611 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731343833393338393733316665616131303062613835643330396362 Jan 28 01:36:45.669000 audit: BPF prog-id=158 op=LOAD Jan 28 01:36:45.673000 audit: BPF prog-id=159 op=LOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=159 op=UNLOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=160 op=LOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=161 op=LOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=161 op=UNLOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=160 op=UNLOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:45.673000 audit: BPF prog-id=162 op=LOAD Jan 28 01:36:45.673000 audit[3397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2638 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:45.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061383131373139356439666165386437626533326166313831353761 Jan 28 01:36:46.090111 containerd[1612]: time="2026-01-28T01:36:46.087158467Z" level=info msg="StartContainer for \"714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5\" returns 
successfully" Jan 28 01:36:46.092556 containerd[1612]: time="2026-01-28T01:36:46.090765395Z" level=info msg="StartContainer for \"0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb\" returns successfully" Jan 28 01:36:47.208904 kubelet[2938]: E0128 01:36:47.208858 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:47.212959 kubelet[2938]: E0128 01:36:47.212880 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:48.325691 kubelet[2938]: E0128 01:36:48.313404 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:49.331539 kubelet[2938]: E0128 01:36:49.331225 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:55.497130 kubelet[2938]: E0128 01:36:55.496016 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:57.881812 kubelet[2938]: E0128 01:36:57.877460 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:58.416000 audit[3468]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.469417 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 28 01:36:58.471058 kernel: audit: type=1325 audit(1769564218.416:567): table=filter:109 family=2 
entries=17 op=nft_register_rule pid=3468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.416000 audit[3468]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff88c7e2e0 a2=0 a3=7fff88c7e2cc items=0 ppid=3042 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.599496 kernel: audit: type=1300 audit(1769564218.416:567): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff88c7e2e0 a2=0 a3=7fff88c7e2cc items=0 ppid=3042 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.656379 kernel: audit: type=1327 audit(1769564218.416:567): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.656810 kernel: audit: type=1325 audit(1769564218.546:568): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.546000 audit[3468]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.668395 kubelet[2938]: E0128 01:36:58.667476 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:58.546000 audit[3468]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff88c7e2e0 a2=0 a3=0 items=0 ppid=3042 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.766185 kernel: audit: type=1300 audit(1769564218.546:568): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff88c7e2e0 a2=0 a3=0 items=0 ppid=3042 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.800425 kernel: audit: type=1327 audit(1769564218.546:568): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.821872 kernel: audit: type=1325 audit(1769564218.749:569): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.749000 audit[3470]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.935053 kernel: audit: type=1300 audit(1769564218.749:569): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffecddff890 a2=0 a3=7ffecddff87c items=0 ppid=3042 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.749000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffecddff890 a2=0 a3=7ffecddff87c items=0 ppid=3042 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.991240 kernel: audit: type=1327 audit(1769564218.749:569): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:58.991447 kernel: audit: type=1325 audit(1769564218.877:570): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.877000 audit[3470]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:58.877000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecddff890 a2=0 a3=0 items=0 ppid=3042 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:58.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:05.612541 kubelet[2938]: E0128 01:37:05.611637 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:12.420706 kubelet[2938]: E0128 01:37:12.403798 2938 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 28 01:37:12.694925 kubelet[2938]: E0128 01:37:12.690093 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.191s" Jan 28 01:37:17.498238 kubelet[2938]: E0128 01:37:17.423223 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:19.011000 audit[3475]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.132386 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:37:19.170973 kernel: audit: type=1325 audit(1769564239.011:571): table=filter:113 family=2 entries=20 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.175061 kernel: audit: type=1300 audit(1769564239.011:571): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7750e170 a2=0 a3=7ffd7750e15c items=0 ppid=3042 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.175128 kernel: audit: type=1327 audit(1769564239.011:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.011000 audit[3475]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7750e170 a2=0 a3=7ffd7750e15c items=0 ppid=3042 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.221899 kernel: audit: type=1325 audit(1769564239.195:572): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.222067 kernel: audit: type=1300 audit(1769564239.195:572): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7750e170 a2=0 a3=0 items=0 
ppid=3042 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.195000 audit[3475]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.195000 audit[3475]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7750e170 a2=0 a3=0 items=0 ppid=3042 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.299816 kernel: audit: type=1327 audit(1769564239.195:572): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.418000 audit[3477]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.433888 kernel: audit: type=1325 audit(1769564239.418:573): table=filter:115 family=2 entries=20 op=nft_register_rule pid=3477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.434088 kernel: audit: type=1300 audit(1769564239.418:573): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffdf1e5890 a2=0 a3=7fffdf1e587c items=0 ppid=3042 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.418000 audit[3477]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffdf1e5890 a2=0 a3=7fffdf1e587c items=0 
ppid=3042 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.518814 kernel: audit: type=1327 audit(1769564239.418:573): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.518000 audit[3477]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:19.518000 audit[3477]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdf1e5890 a2=0 a3=0 items=0 ppid=3042 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:19.536373 kernel: audit: type=1325 audit(1769564239.518:574): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:20.363000 audit[3479]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=3479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:20.363000 audit[3479]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd0aee04d0 a2=0 a3=7ffd0aee04bc items=0 ppid=3042 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:20.363000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:20.400000 audit[3479]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:20.400000 audit[3479]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0aee04d0 a2=0 a3=0 items=0 ppid=3042 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:20.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:20.823527 systemd[1]: Created slice kubepods-besteffort-pod66bc5e9a_0202_4552_b4d5_082d38c96997.slice - libcontainer container kubepods-besteffort-pod66bc5e9a_0202_4552_b4d5_082d38c96997.slice. 
Jan 28 01:37:20.899998 kubelet[2938]: I0128 01:37:20.898914 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66bc5e9a-0202-4552-b4d5-082d38c96997-tigera-ca-bundle\") pod \"calico-typha-7b58f5b9cb-gk8gr\" (UID: \"66bc5e9a-0202-4552-b4d5-082d38c96997\") " pod="calico-system/calico-typha-7b58f5b9cb-gk8gr" Jan 28 01:37:20.899998 kubelet[2938]: I0128 01:37:20.899017 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66bc5e9a-0202-4552-b4d5-082d38c96997-typha-certs\") pod \"calico-typha-7b58f5b9cb-gk8gr\" (UID: \"66bc5e9a-0202-4552-b4d5-082d38c96997\") " pod="calico-system/calico-typha-7b58f5b9cb-gk8gr" Jan 28 01:37:20.899998 kubelet[2938]: I0128 01:37:20.899051 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztmj\" (UniqueName: \"kubernetes.io/projected/66bc5e9a-0202-4552-b4d5-082d38c96997-kube-api-access-2ztmj\") pod \"calico-typha-7b58f5b9cb-gk8gr\" (UID: \"66bc5e9a-0202-4552-b4d5-082d38c96997\") " pod="calico-system/calico-typha-7b58f5b9cb-gk8gr" Jan 28 01:37:21.087000 audit[3482]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:21.087000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd2f5c2620 a2=0 a3=7ffd2f5c260c items=0 ppid=3042 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:21.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:21.144000 audit[3482]: NETFILTER_CFG table=nat:120 
family=2 entries=12 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:21.144000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2f5c2620 a2=0 a3=0 items=0 ppid=3042 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:21.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:21.454527 kubelet[2938]: E0128 01:37:21.454478 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:21.469045 containerd[1612]: time="2026-01-28T01:37:21.468930864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b58f5b9cb-gk8gr,Uid:66bc5e9a-0202-4552-b4d5-082d38c96997,Namespace:calico-system,Attempt:0,}" Jan 28 01:37:21.802852 kubelet[2938]: W0128 01:37:21.798830 2938 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:37:21.802852 kubelet[2938]: E0128 01:37:21.799092 2938 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:37:21.842975 systemd[1]: Created slice 
kubepods-besteffort-pod9771d8f7_48ae_40fd_a423_9de59c68df27.slice - libcontainer container kubepods-besteffort-pod9771d8f7_48ae_40fd_a423_9de59c68df27.slice. Jan 28 01:37:21.875730 containerd[1612]: time="2026-01-28T01:37:21.875598305Z" level=info msg="connecting to shim ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e" address="unix:///run/containerd/s/06060426e9c32a068382867dba64c95e12aac90d6252d8fcc0fce4973317c0f4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:37:21.959882 kubelet[2938]: I0128 01:37:21.957854 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-lib-modules\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.963822 kubelet[2938]: I0128 01:37:21.961970 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-var-lib-calico\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.963822 kubelet[2938]: I0128 01:37:21.962017 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-cni-log-dir\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.963822 kubelet[2938]: I0128 01:37:21.962041 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-flexvol-driver-host\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " 
pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.963822 kubelet[2938]: I0128 01:37:21.962057 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-xtables-lock\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.963822 kubelet[2938]: I0128 01:37:21.962072 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-cni-bin-dir\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.964078 kubelet[2938]: I0128 01:37:21.962085 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-cni-net-dir\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.964078 kubelet[2938]: I0128 01:37:21.962098 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-var-run-calico\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.964078 kubelet[2938]: I0128 01:37:21.962111 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75kv\" (UniqueName: \"kubernetes.io/projected/9771d8f7-48ae-40fd-a423-9de59c68df27-kube-api-access-c75kv\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.964078 
kubelet[2938]: I0128 01:37:21.962128 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9771d8f7-48ae-40fd-a423-9de59c68df27-node-certs\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.964078 kubelet[2938]: I0128 01:37:21.962141 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9771d8f7-48ae-40fd-a423-9de59c68df27-policysync\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:21.966826 kubelet[2938]: I0128 01:37:21.962155 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9771d8f7-48ae-40fd-a423-9de59c68df27-tigera-ca-bundle\") pod \"calico-node-dwj7n\" (UID: \"9771d8f7-48ae-40fd-a423-9de59c68df27\") " pod="calico-system/calico-node-dwj7n" Jan 28 01:37:22.052759 kubelet[2938]: E0128 01:37:22.052475 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:22.079543 kubelet[2938]: E0128 01:37:22.079412 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.082130 kubelet[2938]: W0128 01:37:22.081433 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.087198 kubelet[2938]: E0128 
01:37:22.086145 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.089692 kubelet[2938]: E0128 01:37:22.089572 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.089798 kubelet[2938]: W0128 01:37:22.089692 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.089798 kubelet[2938]: E0128 01:37:22.089791 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.093789 kubelet[2938]: E0128 01:37:22.093398 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.093789 kubelet[2938]: W0128 01:37:22.093418 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.103391 kubelet[2938]: E0128 01:37:22.102875 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.113005 kubelet[2938]: E0128 01:37:22.112692 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.113005 kubelet[2938]: W0128 01:37:22.112726 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.114712 kubelet[2938]: E0128 01:37:22.114574 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.117368 kubelet[2938]: E0128 01:37:22.116391 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.118510 kubelet[2938]: W0128 01:37:22.117816 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.118605 kubelet[2938]: E0128 01:37:22.118532 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.123158 kubelet[2938]: E0128 01:37:22.123101 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.123158 kubelet[2938]: W0128 01:37:22.123126 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.125128 kubelet[2938]: E0128 01:37:22.124764 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.140517 kubelet[2938]: E0128 01:37:22.139216 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.140517 kubelet[2938]: W0128 01:37:22.139249 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.142703 kubelet[2938]: E0128 01:37:22.141690 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.142703 kubelet[2938]: E0128 01:37:22.142363 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.142703 kubelet[2938]: W0128 01:37:22.142380 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.142885 kubelet[2938]: E0128 01:37:22.142709 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.142885 kubelet[2938]: W0128 01:37:22.142721 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.143191 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.144155 kubelet[2938]: W0128 01:37:22.143203 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.143476 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.143508 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.143528 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.143803 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.144155 kubelet[2938]: W0128 01:37:22.143817 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.144155 kubelet[2938]: E0128 01:37:22.144157 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.149427 kubelet[2938]: W0128 01:37:22.144171 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.149427 kubelet[2938]: E0128 01:37:22.144558 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.149427 kubelet[2938]: W0128 01:37:22.144570 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.149427 kubelet[2938]: E0128 01:37:22.144878 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.149427 kubelet[2938]: W0128 01:37:22.144890 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: ""
Jan 28 01:37:22.149427 kubelet[2938]: E0128 01:37:22.145239 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:37:22.149427 kubelet[2938]: W0128 01:37:22.145250 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:37:22.149427 kubelet[2938]: E0128 01:37:22.145404 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:37:22.242872 systemd[1]: Started cri-containerd-ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e.scope - libcontainer container ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e.
Jan 28 01:37:22.285711 kubelet[2938]: I0128 01:37:22.285498 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef-kubelet-dir\") pod \"csi-node-driver-4245v\" (UID: \"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef\") " pod="calico-system/csi-node-driver-4245v"
Jan 28 01:37:22.307183 kubelet[2938]: I0128 01:37:22.300123 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef-socket-dir\") pod \"csi-node-driver-4245v\" (UID: \"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef\") " pod="calico-system/csi-node-driver-4245v"
Jan 28 01:37:22.307183 kubelet[2938]: I0128 01:37:22.300607 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef-varrun\") pod \"csi-node-driver-4245v\" (UID: \"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef\") " pod="calico-system/csi-node-driver-4245v"
Jan 28 01:37:22.307712 kubelet[2938]: I0128 01:37:22.300972 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwnhh\" (UniqueName: \"kubernetes.io/projected/a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef-kube-api-access-qwnhh\") pod \"csi-node-driver-4245v\" (UID: \"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef\") " pod="calico-system/csi-node-driver-4245v"
Jan 28 01:37:22.307712 kubelet[2938]: I0128 01:37:22.301400 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef-registration-dir\") pod \"csi-node-driver-4245v\" (UID: \"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef\") " pod="calico-system/csi-node-driver-4245v"
Jan 28 01:37:22.470079 kubelet[2938]: E0128 01:37:22.469682 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.470079 kubelet[2938]: E0128 01:37:22.470009 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.470955 kubelet[2938]: W0128 01:37:22.470023 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.470955 kubelet[2938]: E0128 01:37:22.470043 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.470955 kubelet[2938]: E0128 01:37:22.470907 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.476221 kubelet[2938]: W0128 01:37:22.470920 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.476221 kubelet[2938]: E0128 01:37:22.472134 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.476221 kubelet[2938]: E0128 01:37:22.473692 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.476221 kubelet[2938]: E0128 01:37:22.474873 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:22.480488 kubelet[2938]: E0128 01:37:22.480460 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.481221 kubelet[2938]: W0128 01:37:22.480849 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.481221 kubelet[2938]: E0128 01:37:22.480984 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.490500 kubelet[2938]: E0128 01:37:22.490462 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.490794 kubelet[2938]: W0128 01:37:22.490767 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.491733 kubelet[2938]: E0128 01:37:22.491182 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.495954 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.499786 kubelet[2938]: W0128 01:37:22.495976 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.496070 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.496460 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.499786 kubelet[2938]: W0128 01:37:22.496475 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.496810 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.497090 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.499786 kubelet[2938]: W0128 01:37:22.497102 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.497333 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.499786 kubelet[2938]: E0128 01:37:22.497805 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.500144 kubelet[2938]: W0128 01:37:22.497817 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.500144 kubelet[2938]: E0128 01:37:22.498038 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.500144 kubelet[2938]: E0128 01:37:22.498702 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.500144 kubelet[2938]: W0128 01:37:22.498716 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.500144 kubelet[2938]: E0128 01:37:22.498733 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.500144 kubelet[2938]: E0128 01:37:22.499086 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.500144 kubelet[2938]: W0128 01:37:22.499100 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.500144 kubelet[2938]: E0128 01:37:22.499112 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.556101 kubelet[2938]: E0128 01:37:22.553438 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.556101 kubelet[2938]: W0128 01:37:22.553521 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.556101 kubelet[2938]: E0128 01:37:22.553556 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:22.670890 kubelet[2938]: E0128 01:37:22.670035 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:22.670890 kubelet[2938]: W0128 01:37:22.670123 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:22.670890 kubelet[2938]: E0128 01:37:22.670157 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:22.735000 audit: BPF prog-id=163 op=LOAD Jan 28 01:37:22.735000 audit: BPF prog-id=164 op=LOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=164 op=UNLOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=165 op=LOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=166 op=LOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=166 op=UNLOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=165 op=UNLOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.735000 audit: BPF prog-id=167 op=LOAD Jan 28 01:37:22.735000 audit[3503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3492 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:22.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564353364383263386563643031326439396161353338653364373432 Jan 28 01:37:22.992145 containerd[1612]: time="2026-01-28T01:37:22.990250666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b58f5b9cb-gk8gr,Uid:66bc5e9a-0202-4552-b4d5-082d38c96997,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e\"" Jan 28 01:37:22.996572 kubelet[2938]: E0128 01:37:22.996198 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:23.009003 containerd[1612]: time="2026-01-28T01:37:23.006873616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:37:23.592106 kubelet[2938]: E0128 01:37:23.585470 2938 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jan 28 01:37:24.438845 kubelet[2938]: E0128 01:37:23.592459 2938 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/9771d8f7-48ae-40fd-a423-9de59c68df27-node-certs podName:9771d8f7-48ae-40fd-a423-9de59c68df27 nodeName:}" failed. No retries permitted until 2026-01-28 01:37:24.088137859 +0000 UTC m=+134.849610744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/9771d8f7-48ae-40fd-a423-9de59c68df27-node-certs") pod "calico-node-dwj7n" (UID: "9771d8f7-48ae-40fd-a423-9de59c68df27") : failed to sync secret cache: timed out waiting for the condition Jan 28 01:37:24.509015 kubelet[2938]: E0128 01:37:24.506992 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.136s" Jan 28 01:37:24.512386 kubelet[2938]: E0128 01:37:24.511370 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:24.513572 kubelet[2938]: E0128 01:37:24.508001 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.513758 kubelet[2938]: W0128 01:37:24.513577 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.513758 kubelet[2938]: E0128 01:37:24.513609 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:24.516988 kubelet[2938]: E0128 01:37:24.516820 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.516988 kubelet[2938]: W0128 01:37:24.516839 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.516988 kubelet[2938]: E0128 01:37:24.516862 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:24.519196 kubelet[2938]: E0128 01:37:24.518872 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.519453 kubelet[2938]: W0128 01:37:24.519431 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.519541 kubelet[2938]: E0128 01:37:24.519525 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:24.534157 kubelet[2938]: E0128 01:37:24.534112 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.537221 kubelet[2938]: W0128 01:37:24.534450 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.537221 kubelet[2938]: E0128 01:37:24.534484 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:24.539164 kubelet[2938]: E0128 01:37:24.539143 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.540569 kubelet[2938]: W0128 01:37:24.540548 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.544187 kubelet[2938]: E0128 01:37:24.544164 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:24.704532 kubelet[2938]: E0128 01:37:24.701118 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:24.704532 kubelet[2938]: W0128 01:37:24.701144 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:24.704532 kubelet[2938]: E0128 01:37:24.701177 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:25.265815 kubelet[2938]: E0128 01:37:25.262903 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:25.546586 containerd[1612]: time="2026-01-28T01:37:25.535873043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwj7n,Uid:9771d8f7-48ae-40fd-a423-9de59c68df27,Namespace:calico-system,Attempt:0,}" Jan 28 01:37:26.381901 kubelet[2938]: E0128 01:37:26.378815 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:26.499502 containerd[1612]: time="2026-01-28T01:37:26.498568230Z" level=info msg="connecting to shim 2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352" address="unix:///run/containerd/s/dced5e368dde98fea6ebe6905d4efb99c44003bcde6edf4bbdf5cb482e4bb8f6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:37:26.817389 systemd[1]: Started 
cri-containerd-2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352.scope - libcontainer container 2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352. Jan 28 01:37:26.995000 audit: BPF prog-id=168 op=LOAD Jan 28 01:37:27.015927 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 28 01:37:27.016089 kernel: audit: type=1334 audit(1769564246.995:587): prog-id=168 op=LOAD Jan 28 01:37:26.998000 audit: BPF prog-id=169 op=LOAD Jan 28 01:37:27.050433 kernel: audit: type=1334 audit(1769564246.998:588): prog-id=169 op=LOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:27.293441 kernel: audit: type=1300 audit(1769564246.998:588): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:27.295111 kernel: audit: type=1327 audit(1769564246.998:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:27.295176 kernel: audit: type=1334 audit(1769564246.998:589): prog-id=169 op=UNLOAD Jan 28 01:37:26.998000 
audit: BPF prog-id=169 op=UNLOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:27.453432 kernel: audit: type=1300 audit(1769564246.998:589): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:27.492849 kernel: audit: type=1327 audit(1769564246.998:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:27.525509 kernel: audit: type=1334 audit(1769564246.998:590): prog-id=170 op=LOAD Jan 28 01:37:27.525700 kernel: audit: type=1300 audit(1769564246.998:590): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: BPF prog-id=170 op=LOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3651 pid=3662 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:27.603899 kernel: audit: type=1327 audit(1769564246.998:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:27.604151 kubelet[2938]: E0128 01:37:27.575630 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:26.998000 audit: BPF prog-id=171 op=LOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:26.998000 audit: BPF prog-id=171 op=UNLOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:26.998000 audit: BPF prog-id=170 op=UNLOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:26.998000 audit: BPF prog-id=172 op=LOAD Jan 28 01:37:26.998000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:26.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266343264396665366336363730653537306236323333663663626363 Jan 28 01:37:28.307838 containerd[1612]: time="2026-01-28T01:37:28.307037609Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-dwj7n,Uid:9771d8f7-48ae-40fd-a423-9de59c68df27,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\"" Jan 28 01:37:28.339378 kubelet[2938]: E0128 01:37:28.339326 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:28.426234 kubelet[2938]: E0128 01:37:28.425036 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:28.906350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96521574.mount: Deactivated successfully. Jan 28 01:37:30.370144 kubelet[2938]: E0128 01:37:30.368228 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:32.427973 kubelet[2938]: E0128 01:37:32.424886 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:32.585923 kubelet[2938]: E0128 01:37:32.585866 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:34.371790 
kubelet[2938]: E0128 01:37:34.368077 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:36.404839 kubelet[2938]: E0128 01:37:36.388856 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:37.607382 kubelet[2938]: E0128 01:37:37.605951 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:38.369559 kubelet[2938]: E0128 01:37:38.369486 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:40.052005 containerd[1612]: time="2026-01-28T01:37:40.051449067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:40.063048 containerd[1612]: time="2026-01-28T01:37:40.062963975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 28 01:37:40.079207 containerd[1612]: time="2026-01-28T01:37:40.078236755Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:40.091588 containerd[1612]: time="2026-01-28T01:37:40.089995540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:40.092064 containerd[1612]: time="2026-01-28T01:37:40.091902506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 17.084909487s" Jan 28 01:37:40.092064 containerd[1612]: time="2026-01-28T01:37:40.091947831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:37:40.098655 containerd[1612]: time="2026-01-28T01:37:40.098473132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:37:40.157780 containerd[1612]: time="2026-01-28T01:37:40.156816152Z" level=info msg="CreateContainer within sandbox \"ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:37:40.207854 containerd[1612]: time="2026-01-28T01:37:40.206799635Z" level=info msg="Container 75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:37:40.285976 containerd[1612]: time="2026-01-28T01:37:40.280911956Z" level=info msg="CreateContainer within sandbox \"ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3\"" Jan 28 01:37:40.285976 containerd[1612]: time="2026-01-28T01:37:40.284102820Z" level=info msg="StartContainer for \"75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3\"" Jan 28 01:37:40.298518 containerd[1612]: time="2026-01-28T01:37:40.298193292Z" level=info msg="connecting to shim 75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3" address="unix:///run/containerd/s/06060426e9c32a068382867dba64c95e12aac90d6252d8fcc0fce4973317c0f4" protocol=ttrpc version=3 Jan 28 01:37:40.369874 kubelet[2938]: E0128 01:37:40.369053 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:40.454911 systemd[1]: Started cri-containerd-75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3.scope - libcontainer container 75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3. 
Jan 28 01:37:40.550000 audit: BPF prog-id=173 op=LOAD Jan 28 01:37:40.573404 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:37:40.573688 kernel: audit: type=1334 audit(1769564260.550:595): prog-id=173 op=LOAD Jan 28 01:37:40.569000 audit: BPF prog-id=174 op=LOAD Jan 28 01:37:40.600094 kernel: audit: type=1334 audit(1769564260.569:596): prog-id=174 op=LOAD Jan 28 01:37:40.569000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.640830 kernel: audit: type=1300 audit(1769564260.569:596): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.569000 audit: BPF prog-id=174 op=UNLOAD Jan 28 01:37:40.688512 kernel: audit: type=1327 audit(1769564260.569:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.688648 kernel: audit: type=1334 audit(1769564260.569:597): prog-id=174 op=UNLOAD Jan 28 01:37:40.688770 kernel: audit: type=1300 audit(1769564260.569:597): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.569000 audit[3697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.728002 kernel: audit: type=1327 audit(1769564260.569:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.759534 kernel: audit: type=1334 audit(1769564260.570:598): prog-id=175 op=LOAD Jan 28 01:37:40.570000 audit: BPF prog-id=175 op=LOAD Jan 28 01:37:40.786363 kernel: audit: type=1300 audit(1769564260.570:598): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.570000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:37:40.801344 kernel: audit: type=1327 audit(1769564260.570:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.570000 audit: BPF prog-id=176 op=LOAD Jan 28 01:37:40.570000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.570000 audit: BPF prog-id=176 op=UNLOAD Jan 28 01:37:40.570000 audit[3697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 
Jan 28 01:37:40.570000 audit: BPF prog-id=175 op=UNLOAD Jan 28 01:37:40.570000 audit[3697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.570000 audit: BPF prog-id=177 op=LOAD Jan 28 01:37:40.570000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3492 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:40.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643266323534396332333636363335303333613062636535313030 Jan 28 01:37:40.882111 containerd[1612]: time="2026-01-28T01:37:40.882054510Z" level=info msg="StartContainer for \"75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3\" returns successfully" Jan 28 01:37:41.399321 kubelet[2938]: E0128 01:37:41.398097 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:41.491756 kubelet[2938]: I0128 01:37:41.491616 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b58f5b9cb-gk8gr" 
podStartSLOduration=4.398185766 podStartE2EDuration="21.491574166s" podCreationTimestamp="2026-01-28 01:37:20 +0000 UTC" firstStartedPulling="2026-01-28 01:37:23.004399429 +0000 UTC m=+133.765872314" lastFinishedPulling="2026-01-28 01:37:40.097787829 +0000 UTC m=+150.859260714" observedRunningTime="2026-01-28 01:37:41.491541705 +0000 UTC m=+152.253014611" watchObservedRunningTime="2026-01-28 01:37:41.491574166 +0000 UTC m=+152.253047071" Jan 28 01:37:41.493772 kubelet[2938]: E0128 01:37:41.492460 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.493772 kubelet[2938]: W0128 01:37:41.492480 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.493772 kubelet[2938]: E0128 01:37:41.492504 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.494123 kubelet[2938]: E0128 01:37:41.494010 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.494123 kubelet[2938]: W0128 01:37:41.494033 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.494123 kubelet[2938]: E0128 01:37:41.494056 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.495122 kubelet[2938]: E0128 01:37:41.495018 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.495122 kubelet[2938]: W0128 01:37:41.495038 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.495122 kubelet[2938]: E0128 01:37:41.495057 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.501451 kubelet[2938]: E0128 01:37:41.495639 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.501451 kubelet[2938]: W0128 01:37:41.495657 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.501451 kubelet[2938]: E0128 01:37:41.495675 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.502586 kubelet[2938]: E0128 01:37:41.502556 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.505586 kubelet[2938]: W0128 01:37:41.505552 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.505692 kubelet[2938]: E0128 01:37:41.505674 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.506152 kubelet[2938]: E0128 01:37:41.506135 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.506241 kubelet[2938]: W0128 01:37:41.506225 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.506437 kubelet[2938]: E0128 01:37:41.506420 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.507049 kubelet[2938]: E0128 01:37:41.507028 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.507137 kubelet[2938]: W0128 01:37:41.507122 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.507211 kubelet[2938]: E0128 01:37:41.507196 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.507645 kubelet[2938]: E0128 01:37:41.507631 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.507801 kubelet[2938]: W0128 01:37:41.507784 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.507881 kubelet[2938]: E0128 01:37:41.507865 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.508192 kubelet[2938]: E0128 01:37:41.508180 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.508360 kubelet[2938]: W0128 01:37:41.508341 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.508444 kubelet[2938]: E0128 01:37:41.508430 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.511594 kubelet[2938]: E0128 01:37:41.511577 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.511689 kubelet[2938]: W0128 01:37:41.511674 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.511837 kubelet[2938]: E0128 01:37:41.511821 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.514209 kubelet[2938]: E0128 01:37:41.514192 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.514391 kubelet[2938]: W0128 01:37:41.514374 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.514481 kubelet[2938]: E0128 01:37:41.514466 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.514886 kubelet[2938]: E0128 01:37:41.514867 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.516488 kubelet[2938]: W0128 01:37:41.516235 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.516488 kubelet[2938]: E0128 01:37:41.516348 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.517893 kubelet[2938]: E0128 01:37:41.517794 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.517893 kubelet[2938]: W0128 01:37:41.517810 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.517893 kubelet[2938]: E0128 01:37:41.517827 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.521172 kubelet[2938]: E0128 01:37:41.521076 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.521172 kubelet[2938]: W0128 01:37:41.521094 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.521172 kubelet[2938]: E0128 01:37:41.521109 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.521779 kubelet[2938]: E0128 01:37:41.521610 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.521779 kubelet[2938]: W0128 01:37:41.521626 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.521779 kubelet[2938]: E0128 01:37:41.521645 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.573378 kubelet[2938]: E0128 01:37:41.573243 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.573570 kubelet[2938]: W0128 01:37:41.573546 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.573760 kubelet[2938]: E0128 01:37:41.573688 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.580648 kubelet[2938]: E0128 01:37:41.580614 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.581017 kubelet[2938]: W0128 01:37:41.580849 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.582381 kubelet[2938]: E0128 01:37:41.582341 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.586200 kubelet[2938]: E0128 01:37:41.585821 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.586200 kubelet[2938]: W0128 01:37:41.585844 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.586200 kubelet[2938]: E0128 01:37:41.585871 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.589200 kubelet[2938]: E0128 01:37:41.589175 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.589554 kubelet[2938]: W0128 01:37:41.589391 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.589964 kubelet[2938]: E0128 01:37:41.589939 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.593806 kubelet[2938]: E0128 01:37:41.593624 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.593806 kubelet[2938]: W0128 01:37:41.593647 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.594220 kubelet[2938]: E0128 01:37:41.594197 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.602831 kubelet[2938]: E0128 01:37:41.602469 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.602831 kubelet[2938]: W0128 01:37:41.602503 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.608992 kubelet[2938]: E0128 01:37:41.603063 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.617399 kubelet[2938]: E0128 01:37:41.617364 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.617832 kubelet[2938]: W0128 01:37:41.617799 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.619857 kubelet[2938]: E0128 01:37:41.619830 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.625491 kubelet[2938]: E0128 01:37:41.625461 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.625881 kubelet[2938]: W0128 01:37:41.625852 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.626124 kubelet[2938]: E0128 01:37:41.626106 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.631156 kubelet[2938]: E0128 01:37:41.631128 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.631561 kubelet[2938]: W0128 01:37:41.631538 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.631812 kubelet[2938]: E0128 01:37:41.631788 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.635092 kubelet[2938]: E0128 01:37:41.635071 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.635197 kubelet[2938]: W0128 01:37:41.635176 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.635548 kubelet[2938]: E0128 01:37:41.635524 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.642580 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.650914 kubelet[2938]: W0128 01:37:41.644959 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.645364 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.646907 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.650914 kubelet[2938]: W0128 01:37:41.646925 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.646997 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.647823 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.650914 kubelet[2938]: W0128 01:37:41.647838 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.647901 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.650914 kubelet[2938]: E0128 01:37:41.650043 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.652058 kubelet[2938]: W0128 01:37:41.650650 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.653374 kubelet[2938]: E0128 01:37:41.652635 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.680166 kubelet[2938]: E0128 01:37:41.673249 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.680166 kubelet[2938]: W0128 01:37:41.673360 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.680166 kubelet[2938]: E0128 01:37:41.673470 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.711169 kubelet[2938]: E0128 01:37:41.710925 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.711169 kubelet[2938]: W0128 01:37:41.710966 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.711780 kubelet[2938]: E0128 01:37:41.711550 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.711780 kubelet[2938]: W0128 01:37:41.711575 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.712211 kubelet[2938]: E0128 01:37:41.711951 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:41.712211 kubelet[2938]: W0128 01:37:41.711967 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:41.712211 kubelet[2938]: E0128 01:37:41.711989 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:41.712211 kubelet[2938]: E0128 01:37:41.712031 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:41.712211 kubelet[2938]: E0128 01:37:41.712049 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.030571 containerd[1612]: time="2026-01-28T01:37:42.030159175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:42.047452 containerd[1612]: time="2026-01-28T01:37:42.046891178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 28 01:37:42.048610 containerd[1612]: time="2026-01-28T01:37:42.048523384Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:42.062046 containerd[1612]: time="2026-01-28T01:37:42.061578887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:37:42.076649 containerd[1612]: time="2026-01-28T01:37:42.075102232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.976472517s" Jan 28 01:37:42.076649 containerd[1612]: time="2026-01-28T01:37:42.076527893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" 
Jan 28 01:37:42.114871 containerd[1612]: time="2026-01-28T01:37:42.114579300Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:37:42.320422 containerd[1612]: time="2026-01-28T01:37:42.320169064Z" level=info msg="Container d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:37:42.368423 kubelet[2938]: E0128 01:37:42.368242 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:42.405027 kubelet[2938]: E0128 01:37:42.404127 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:42.444562 containerd[1612]: time="2026-01-28T01:37:42.444325451Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc\"" Jan 28 01:37:42.453968 containerd[1612]: time="2026-01-28T01:37:42.453921898Z" level=info msg="StartContainer for \"d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc\"" Jan 28 01:37:42.457185 kubelet[2938]: E0128 01:37:42.455477 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.457185 kubelet[2938]: W0128 01:37:42.455508 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.457185 kubelet[2938]: E0128 01:37:42.455542 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.462167 kubelet[2938]: E0128 01:37:42.462142 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.462399 kubelet[2938]: W0128 01:37:42.462376 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.462964 kubelet[2938]: E0128 01:37:42.462537 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.489470 kubelet[2938]: E0128 01:37:42.489387 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.492991 kubelet[2938]: W0128 01:37:42.492916 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.496470 kubelet[2938]: E0128 01:37:42.493147 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.511600 containerd[1612]: time="2026-01-28T01:37:42.511488502Z" level=info msg="connecting to shim d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc" address="unix:///run/containerd/s/dced5e368dde98fea6ebe6905d4efb99c44003bcde6edf4bbdf5cb482e4bb8f6" protocol=ttrpc version=3 Jan 28 01:37:42.516665 kubelet[2938]: E0128 01:37:42.515168 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.516665 kubelet[2938]: W0128 01:37:42.515198 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.516665 kubelet[2938]: E0128 01:37:42.515227 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.523559 kubelet[2938]: E0128 01:37:42.522809 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.524549 kubelet[2938]: W0128 01:37:42.524422 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.524549 kubelet[2938]: E0128 01:37:42.524464 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.537039 kubelet[2938]: E0128 01:37:42.536513 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.537039 kubelet[2938]: W0128 01:37:42.536535 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.537039 kubelet[2938]: E0128 01:37:42.536559 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.537434 kubelet[2938]: E0128 01:37:42.537203 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.537434 kubelet[2938]: W0128 01:37:42.537216 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.537434 kubelet[2938]: E0128 01:37:42.537232 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.540197 kubelet[2938]: E0128 01:37:42.540065 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.540197 kubelet[2938]: W0128 01:37:42.540087 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.540197 kubelet[2938]: E0128 01:37:42.540109 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.546402 kubelet[2938]: E0128 01:37:42.546380 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.546695 kubelet[2938]: W0128 01:37:42.546483 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.546695 kubelet[2938]: E0128 01:37:42.546513 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.549640 kubelet[2938]: E0128 01:37:42.548151 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.549640 kubelet[2938]: W0128 01:37:42.548170 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.549640 kubelet[2938]: E0128 01:37:42.548190 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.566918 kubelet[2938]: E0128 01:37:42.562148 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.567159 kubelet[2938]: W0128 01:37:42.567130 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.567409 kubelet[2938]: E0128 01:37:42.567384 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.624676 kubelet[2938]: E0128 01:37:42.622760 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:42.642182 kubelet[2938]: E0128 01:37:42.642133 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.647879 kubelet[2938]: W0128 01:37:42.642380 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.647879 kubelet[2938]: E0128 01:37:42.642425 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.654244 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.659410 kubelet[2938]: W0128 01:37:42.654352 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.654379 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.654649 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.659410 kubelet[2938]: W0128 01:37:42.654658 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.654672 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.656210 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.659410 kubelet[2938]: W0128 01:37:42.656230 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.656248 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.659410 kubelet[2938]: E0128 01:37:42.657001 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.680474 kubelet[2938]: W0128 01:37:42.657015 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.680474 kubelet[2938]: E0128 01:37:42.657032 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.680474 kubelet[2938]: E0128 01:37:42.676104 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.680474 kubelet[2938]: W0128 01:37:42.676131 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.680474 kubelet[2938]: E0128 01:37:42.676158 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.688799 kubelet[2938]: E0128 01:37:42.688320 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.688799 kubelet[2938]: W0128 01:37:42.688351 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.690853 kubelet[2938]: E0128 01:37:42.690380 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.691951 kubelet[2938]: E0128 01:37:42.691803 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.692033 kubelet[2938]: W0128 01:37:42.692000 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.692764 kubelet[2938]: E0128 01:37:42.692528 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.698026 kubelet[2938]: E0128 01:37:42.695016 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.698026 kubelet[2938]: W0128 01:37:42.695337 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.698026 kubelet[2938]: E0128 01:37:42.695624 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.700586 kubelet[2938]: E0128 01:37:42.700455 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.700586 kubelet[2938]: W0128 01:37:42.700492 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.702151 kubelet[2938]: E0128 01:37:42.701960 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.703348 kubelet[2938]: E0128 01:37:42.703127 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.703348 kubelet[2938]: W0128 01:37:42.703144 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.703532 kubelet[2938]: E0128 01:37:42.703505 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.712420 kubelet[2938]: E0128 01:37:42.712381 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.712678 kubelet[2938]: W0128 01:37:42.712580 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.719600 kubelet[2938]: E0128 01:37:42.719549 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.726697 kubelet[2938]: E0128 01:37:42.726512 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.726697 kubelet[2938]: W0128 01:37:42.726536 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.726697 kubelet[2938]: E0128 01:37:42.726664 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.726697 kubelet[2938]: E0128 01:37:42.727693 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.726697 kubelet[2938]: W0128 01:37:42.727756 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.726697 kubelet[2938]: E0128 01:37:42.728874 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.734041 kubelet[2938]: E0128 01:37:42.732779 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.753867 kubelet[2938]: W0128 01:37:42.753139 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.753867 kubelet[2938]: E0128 01:37:42.753474 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.758386 kubelet[2938]: E0128 01:37:42.755780 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.758386 kubelet[2938]: W0128 01:37:42.757114 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.763418 kubelet[2938]: E0128 01:37:42.759193 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.763418 kubelet[2938]: E0128 01:37:42.762014 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.765918 kubelet[2938]: W0128 01:37:42.765652 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.768602 kubelet[2938]: E0128 01:37:42.768583 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.774334 kubelet[2938]: E0128 01:37:42.773335 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.774334 kubelet[2938]: W0128 01:37:42.773392 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.773566 systemd[1]: Started cri-containerd-d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc.scope - libcontainer container d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc. Jan 28 01:37:42.775074 kubelet[2938]: E0128 01:37:42.774950 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.777000 audit[3820]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:42.777000 audit[3820]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc44f05250 a2=0 a3=7ffc44f0523c items=0 ppid=3042 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:42.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:42.782684 kubelet[2938]: E0128 01:37:42.782594 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.782684 kubelet[2938]: W0128 01:37:42.782653 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.783448 kubelet[2938]: E0128 01:37:42.783380 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.790000 audit[3820]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:37:42.790000 audit[3820]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc44f05250 a2=0 a3=7ffc44f0523c items=0 ppid=3042 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:42.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:37:42.800904 kubelet[2938]: E0128 01:37:42.799839 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.800904 kubelet[2938]: W0128 01:37:42.799896 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.800904 kubelet[2938]: E0128 01:37:42.799928 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:42.813444 kubelet[2938]: E0128 01:37:42.812922 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.813444 kubelet[2938]: W0128 01:37:42.812951 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.813444 kubelet[2938]: E0128 01:37:42.812987 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:37:42.820376 kubelet[2938]: E0128 01:37:42.820023 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:37:42.820376 kubelet[2938]: W0128 01:37:42.820048 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:37:42.820376 kubelet[2938]: E0128 01:37:42.820075 2938 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:37:43.022000 audit: BPF prog-id=178 op=LOAD Jan 28 01:37:43.022000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3651 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432616238646436353462653431333966353436316631343332613038 Jan 28 01:37:43.022000 audit: BPF prog-id=179 op=LOAD Jan 28 01:37:43.022000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3651 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432616238646436353462653431333966353436316631343332613038 Jan 28 01:37:43.022000 audit: BPF prog-id=179 op=UNLOAD Jan 28 01:37:43.022000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:43.022000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432616238646436353462653431333966353436316631343332613038 Jan 28 01:37:43.022000 audit: BPF prog-id=178 op=UNLOAD Jan 28 01:37:43.022000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432616238646436353462653431333966353436316631343332613038 Jan 28 01:37:43.022000 audit: BPF prog-id=180 op=LOAD Jan 28 01:37:43.022000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3651 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432616238646436353462653431333966353436316631343332613038 Jan 28 01:37:43.337099 containerd[1612]: time="2026-01-28T01:37:43.332961896Z" level=info msg="StartContainer for \"d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc\" returns successfully" Jan 28 01:37:43.377863 systemd[1]: cri-containerd-d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc.scope: Deactivated successfully. 
Jan 28 01:37:43.384008 containerd[1612]: time="2026-01-28T01:37:43.381635364Z" level=info msg="received container exit event container_id:\"d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc\" id:\"d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc\" pid:3825 exited_at:{seconds:1769564263 nanos:381162833}" Jan 28 01:37:43.383000 audit: BPF prog-id=180 op=UNLOAD Jan 28 01:37:43.454081 kubelet[2938]: E0128 01:37:43.447803 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:43.454081 kubelet[2938]: E0128 01:37:43.448108 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:43.576250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc-rootfs.mount: Deactivated successfully. 
Jan 28 01:37:45.031684 kubelet[2938]: E0128 01:37:44.995964 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:46.002135 kubelet[2938]: E0128 01:37:45.965881 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:46.627592 containerd[1612]: time="2026-01-28T01:37:46.626845727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:37:46.685958 kubelet[2938]: E0128 01:37:46.683824 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:47.473039 kubelet[2938]: E0128 01:37:47.471350 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:57.607091 kubelet[2938]: E0128 01:37:57.590404 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:37:57.958514 kubelet[2938]: E0128 01:37:57.958219 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.529s" Jan 28 01:37:58.009117 kubelet[2938]: E0128 01:37:58.008500 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:37:58.016654 kubelet[2938]: E0128 01:37:58.016613 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:59.377014 kubelet[2938]: E0128 01:37:59.374700 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:01.383398 kubelet[2938]: E0128 01:38:01.382869 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:02.606181 kubelet[2938]: E0128 01:38:02.606073 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:38:03.389437 kubelet[2938]: E0128 01:38:03.388721 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:05.390537 kubelet[2938]: E0128 01:38:05.388392 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:06.488791 kubelet[2938]: E0128 01:38:06.482016 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:07.376894 kubelet[2938]: E0128 01:38:07.375558 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:07.638918 kubelet[2938]: E0128 01:38:07.631944 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:38:09.371874 kubelet[2938]: E0128 01:38:09.371692 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:11.378593 kubelet[2938]: E0128 01:38:11.377027 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:12.636956 kubelet[2938]: E0128 01:38:12.636004 2938 kubelet.go:3002] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:38:13.390376 kubelet[2938]: E0128 01:38:13.388979 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:15.383463 kubelet[2938]: E0128 01:38:15.383031 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:16.887713 containerd[1612]: time="2026-01-28T01:38:16.887555777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:38:16.898354 containerd[1612]: time="2026-01-28T01:38:16.898186676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 01:38:16.906154 containerd[1612]: time="2026-01-28T01:38:16.906115982Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:38:16.927608 containerd[1612]: time="2026-01-28T01:38:16.927168948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:38:16.932067 containerd[1612]: time="2026-01-28T01:38:16.932026472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 30.30507074s" Jan 28 01:38:16.932234 containerd[1612]: time="2026-01-28T01:38:16.932204916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:38:16.990681 containerd[1612]: time="2026-01-28T01:38:16.990615775Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:38:17.078367 containerd[1612]: time="2026-01-28T01:38:17.076979005Z" level=info msg="Container 7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:38:17.165026 containerd[1612]: time="2026-01-28T01:38:17.160093511Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759\"" Jan 28 01:38:17.174007 containerd[1612]: time="2026-01-28T01:38:17.169795909Z" level=info msg="StartContainer for \"7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759\"" Jan 28 01:38:17.182070 containerd[1612]: time="2026-01-28T01:38:17.180836722Z" level=info msg="connecting to shim 7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759" address="unix:///run/containerd/s/dced5e368dde98fea6ebe6905d4efb99c44003bcde6edf4bbdf5cb482e4bb8f6" protocol=ttrpc version=3 Jan 28 01:38:17.371072 kubelet[2938]: E0128 01:38:17.369790 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:17.428787 systemd[1]: Started cri-containerd-7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759.scope - libcontainer container 7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759. Jan 28 01:38:17.652683 kubelet[2938]: E0128 01:38:17.651672 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:38:17.762000 audit: BPF prog-id=181 op=LOAD Jan 28 01:38:17.779479 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 01:38:17.779838 kernel: audit: type=1334 audit(1769564297.762:611): prog-id=181 op=LOAD Jan 28 01:38:17.791364 kernel: audit: type=1300 audit(1769564297.762:611): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.762000 audit[3879]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:17.771000 audit: BPF prog-id=182 op=LOAD Jan 28 01:38:17.990490 kernel: audit: type=1327 
audit(1769564297.762:611): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:18.003588 kernel: audit: type=1334 audit(1769564297.771:612): prog-id=182 op=LOAD Jan 28 01:38:18.005788 kernel: audit: type=1300 audit(1769564297.771:612): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.771000 audit[3879]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:18.154951 kernel: audit: type=1327 audit(1769564297.771:612): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:17.771000 audit: BPF prog-id=182 op=UNLOAD Jan 28 01:38:17.771000 audit[3879]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:18.185388 kernel: audit: type=1334 audit(1769564297.771:613): prog-id=182 op=UNLOAD Jan 28 01:38:18.185526 kernel: audit: type=1300 audit(1769564297.771:613): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:18.185577 kernel: audit: type=1327 audit(1769564297.771:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:18.224069 kernel: audit: type=1334 audit(1769564297.771:614): prog-id=181 op=UNLOAD Jan 28 01:38:17.771000 audit: BPF prog-id=181 op=UNLOAD Jan 28 01:38:17.771000 audit[3879]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:17.771000 audit: BPF prog-id=183 
op=LOAD Jan 28 01:38:17.771000 audit[3879]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3651 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:38:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353164396131616136626130633864636566336336663331396461 Jan 28 01:38:18.284740 containerd[1612]: time="2026-01-28T01:38:18.284445090Z" level=info msg="StartContainer for \"7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759\" returns successfully" Jan 28 01:38:18.668384 kubelet[2938]: E0128 01:38:18.668180 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:19.372128 kubelet[2938]: E0128 01:38:19.371695 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:19.676061 kubelet[2938]: E0128 01:38:19.675572 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:21.370980 kubelet[2938]: E0128 01:38:21.370797 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:22.676428 kubelet[2938]: E0128 01:38:22.661825 2938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:38:23.378706 kubelet[2938]: E0128 01:38:23.377950 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:25.371392 kubelet[2938]: E0128 01:38:25.370856 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:25.582709 systemd[1]: cri-containerd-7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759.scope: Deactivated successfully. Jan 28 01:38:25.583434 systemd[1]: cri-containerd-7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759.scope: Consumed 2.691s CPU time, 174M memory peak, 3.6M read from disk, 171.3M written to disk. 
Jan 28 01:38:25.587783 containerd[1612]: time="2026-01-28T01:38:25.587667245Z" level=info msg="received container exit event container_id:\"7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759\" id:\"7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759\" pid:3892 exited_at:{seconds:1769564305 nanos:586564579}" Jan 28 01:38:25.598000 audit: BPF prog-id=183 op=UNLOAD Jan 28 01:38:25.607204 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 01:38:25.612215 kernel: audit: type=1334 audit(1769564305.598:616): prog-id=183 op=UNLOAD Jan 28 01:38:25.920233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7151d9a1aa6ba0c8dcef3c6f319da3796149f1980dacdec6af9af8c4f80e7759-rootfs.mount: Deactivated successfully. Jan 28 01:38:27.045626 kubelet[2938]: E0128 01:38:27.041872 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:27.060755 containerd[1612]: time="2026-01-28T01:38:27.060711849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:38:27.381851 kubelet[2938]: E0128 01:38:27.379707 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:27.383001 kubelet[2938]: E0128 01:38:27.380802 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:28.770412 kubelet[2938]: I0128 01:38:28.770142 2938 status_manager.go:890] "Failed to get status for pod" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784" pod="kube-system/coredns-668d6bf9bc-2lrhs" err="pods 
\"coredns-668d6bf9bc-2lrhs\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 28 01:38:28.790193 systemd[1]: Created slice kubepods-burstable-podd50b0b36_811a_467b_a5ed_e0483bb76784.slice - libcontainer container kubepods-burstable-podd50b0b36_811a_467b_a5ed_e0483bb76784.slice. Jan 28 01:38:28.818203 kubelet[2938]: I0128 01:38:28.816072 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d50b0b36-811a-467b-a5ed-e0483bb76784-config-volume\") pod \"coredns-668d6bf9bc-2lrhs\" (UID: \"d50b0b36-811a-467b-a5ed-e0483bb76784\") " pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:38:28.818203 kubelet[2938]: I0128 01:38:28.817525 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbz6\" (UniqueName: \"kubernetes.io/projected/d50b0b36-811a-467b-a5ed-e0483bb76784-kube-api-access-2lbz6\") pod \"coredns-668d6bf9bc-2lrhs\" (UID: \"d50b0b36-811a-467b-a5ed-e0483bb76784\") " pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:38:28.913047 systemd[1]: Created slice kubepods-besteffort-poddf1949f7_cac3_4cf6_8c60_f8d963a49163.slice - libcontainer container kubepods-besteffort-poddf1949f7_cac3_4cf6_8c60_f8d963a49163.slice. 
Jan 28 01:38:28.931559 kubelet[2938]: I0128 01:38:28.930700 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbrh\" (UniqueName: \"kubernetes.io/projected/df1949f7-cac3-4cf6-8c60-f8d963a49163-kube-api-access-pdbrh\") pod \"calico-apiserver-7fcc88c58b-57jrt\" (UID: \"df1949f7-cac3-4cf6-8c60-f8d963a49163\") " pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:38:28.931559 kubelet[2938]: I0128 01:38:28.930804 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgjj\" (UniqueName: \"kubernetes.io/projected/477c43dc-f740-4bfd-b59c-255fe52c8673-kube-api-access-twgjj\") pod \"calico-kube-controllers-78b6655f44-dr84p\" (UID: \"477c43dc-f740-4bfd-b59c-255fe52c8673\") " pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:38:28.931559 kubelet[2938]: I0128 01:38:28.930879 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0234091-1bfb-4c2b-914c-35e344cefc9d-config-volume\") pod \"coredns-668d6bf9bc-6cpzt\" (UID: \"f0234091-1bfb-4c2b-914c-35e344cefc9d\") " pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:38:28.931559 kubelet[2938]: I0128 01:38:28.930974 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcsg2\" (UniqueName: \"kubernetes.io/projected/f0234091-1bfb-4c2b-914c-35e344cefc9d-kube-api-access-bcsg2\") pod \"coredns-668d6bf9bc-6cpzt\" (UID: \"f0234091-1bfb-4c2b-914c-35e344cefc9d\") " pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:38:28.931559 kubelet[2938]: I0128 01:38:28.931008 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477c43dc-f740-4bfd-b59c-255fe52c8673-tigera-ca-bundle\") pod 
\"calico-kube-controllers-78b6655f44-dr84p\" (UID: \"477c43dc-f740-4bfd-b59c-255fe52c8673\") " pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:38:28.931892 kubelet[2938]: I0128 01:38:28.931045 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df1949f7-cac3-4cf6-8c60-f8d963a49163-calico-apiserver-certs\") pod \"calico-apiserver-7fcc88c58b-57jrt\" (UID: \"df1949f7-cac3-4cf6-8c60-f8d963a49163\") " pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:38:29.001110 systemd[1]: Created slice kubepods-burstable-podf0234091_1bfb_4c2b_914c_35e344cefc9d.slice - libcontainer container kubepods-burstable-podf0234091_1bfb_4c2b_914c_35e344cefc9d.slice. Jan 28 01:38:29.057112 kubelet[2938]: I0128 01:38:29.056390 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38764aa9-f6ea-4a8f-ac0e-198fa6f97144-calico-apiserver-certs\") pod \"calico-apiserver-7fcc88c58b-n2mcr\" (UID: \"38764aa9-f6ea-4a8f-ac0e-198fa6f97144\") " pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:38:29.057112 kubelet[2938]: I0128 01:38:29.056489 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwxn\" (UniqueName: \"kubernetes.io/projected/38764aa9-f6ea-4a8f-ac0e-198fa6f97144-kube-api-access-ckwxn\") pod \"calico-apiserver-7fcc88c58b-n2mcr\" (UID: \"38764aa9-f6ea-4a8f-ac0e-198fa6f97144\") " pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:38:29.057112 kubelet[2938]: I0128 01:38:29.056557 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-backend-key-pair\") pod \"whisker-9656966bd-lm4hd\" (UID: 
\"e39d0520-24ca-4f27-b501-b31974cc3332\") " pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:38:29.057112 kubelet[2938]: I0128 01:38:29.056584 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7j8\" (UniqueName: \"kubernetes.io/projected/e39d0520-24ca-4f27-b501-b31974cc3332-kube-api-access-sj7j8\") pod \"whisker-9656966bd-lm4hd\" (UID: \"e39d0520-24ca-4f27-b501-b31974cc3332\") " pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:38:29.057112 kubelet[2938]: I0128 01:38:29.056608 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-ca-bundle\") pod \"whisker-9656966bd-lm4hd\" (UID: \"e39d0520-24ca-4f27-b501-b31974cc3332\") " pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:38:29.058096 systemd[1]: Created slice kubepods-besteffort-pod477c43dc_f740_4bfd_b59c_255fe52c8673.slice - libcontainer container kubepods-besteffort-pod477c43dc_f740_4bfd_b59c_255fe52c8673.slice. 
Jan 28 01:38:29.087477 kubelet[2938]: I0128 01:38:29.056627 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f83d8e-e337-4a1b-9dba-8df114668f11-goldmane-ca-bundle\") pod \"goldmane-666569f655-s28bh\" (UID: \"55f83d8e-e337-4a1b-9dba-8df114668f11\") " pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:29.087477 kubelet[2938]: I0128 01:38:29.083970 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqn7r\" (UniqueName: \"kubernetes.io/projected/55f83d8e-e337-4a1b-9dba-8df114668f11-kube-api-access-wqn7r\") pod \"goldmane-666569f655-s28bh\" (UID: \"55f83d8e-e337-4a1b-9dba-8df114668f11\") " pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:29.087477 kubelet[2938]: I0128 01:38:29.084026 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f83d8e-e337-4a1b-9dba-8df114668f11-config\") pod \"goldmane-666569f655-s28bh\" (UID: \"55f83d8e-e337-4a1b-9dba-8df114668f11\") " pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:29.087477 kubelet[2938]: I0128 01:38:29.084207 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/55f83d8e-e337-4a1b-9dba-8df114668f11-goldmane-key-pair\") pod \"goldmane-666569f655-s28bh\" (UID: \"55f83d8e-e337-4a1b-9dba-8df114668f11\") " pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:29.192584 systemd[1]: Created slice kubepods-besteffort-pode39d0520_24ca_4f27_b501_b31974cc3332.slice - libcontainer container kubepods-besteffort-pode39d0520_24ca_4f27_b501_b31974cc3332.slice. 
Jan 28 01:38:29.344705 systemd[1]: Created slice kubepods-besteffort-pod38764aa9_f6ea_4a8f_ac0e_198fa6f97144.slice - libcontainer container kubepods-besteffort-pod38764aa9_f6ea_4a8f_ac0e_198fa6f97144.slice. Jan 28 01:38:29.394440 systemd[1]: Created slice kubepods-besteffort-pod55f83d8e_e337_4a1b_9dba_8df114668f11.slice - libcontainer container kubepods-besteffort-pod55f83d8e_e337_4a1b_9dba_8df114668f11.slice. Jan 28 01:38:29.667441 kubelet[2938]: E0128 01:38:29.664458 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:29.677175 kubelet[2938]: E0128 01:38:29.673727 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:29.690495 containerd[1612]: time="2026-01-28T01:38:29.688886451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}" Jan 28 01:38:29.713589 systemd[1]: Created slice kubepods-besteffort-poda7aeb9de_99dc_45ef_b9ad_d9f2afb967ef.slice - libcontainer container kubepods-besteffort-poda7aeb9de_99dc_45ef_b9ad_d9f2afb967ef.slice. 
Jan 28 01:38:29.722374 containerd[1612]: time="2026-01-28T01:38:29.715220633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}" Jan 28 01:38:29.771516 containerd[1612]: time="2026-01-28T01:38:29.769870994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:38:29.873475 containerd[1612]: time="2026-01-28T01:38:29.870437429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}" Jan 28 01:38:29.916411 containerd[1612]: time="2026-01-28T01:38:29.916250329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}" Jan 28 01:38:29.972690 containerd[1612]: time="2026-01-28T01:38:29.959621752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}" Jan 28 01:38:29.990444 containerd[1612]: time="2026-01-28T01:38:29.990392804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}" Jan 28 01:38:30.320753 containerd[1612]: time="2026-01-28T01:38:30.311808669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:38:30.407226 kubelet[2938]: E0128 01:38:30.406353 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:32.614349 containerd[1612]: 
time="2026-01-28T01:38:32.610641068Z" level=error msg="Failed to destroy network for sandbox \"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:32.635470 systemd[1]: run-netns-cni\x2dac6ca8fb\x2de079\x2dc7fc\x2d280a\x2d5681ca4cb7a8.mount: Deactivated successfully. Jan 28 01:38:32.738692 containerd[1612]: time="2026-01-28T01:38:32.733044708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:32.762117 kubelet[2938]: E0128 01:38:32.754239 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:32.762117 kubelet[2938]: E0128 01:38:32.759061 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 
01:38:32.762117 kubelet[2938]: E0128 01:38:32.759237 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:38:32.762885 kubelet[2938]: E0128 01:38:32.759487 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e572851bda7615bd684e98c5052928e0e5a6b958f39b71fe3281f0fefb3d5271\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lrhs" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784" Jan 28 01:38:33.356827 containerd[1612]: time="2026-01-28T01:38:33.355812743Z" level=error msg="Failed to destroy network for sandbox \"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.376550 containerd[1612]: time="2026-01-28T01:38:33.370750789Z" level=error msg="Failed to destroy network for sandbox \"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.397844 systemd[1]: run-netns-cni\x2d7dd1cf74\x2d3990\x2df2ea\x2d9e51\x2d21d86065109c.mount: Deactivated successfully. Jan 28 01:38:33.398232 systemd[1]: run-netns-cni\x2d9432d94d\x2d05e1\x2d8e2d\x2d104d\x2dedff730c6a23.mount: Deactivated successfully. Jan 28 01:38:33.432567 containerd[1612]: time="2026-01-28T01:38:33.422466930Z" level=error msg="Failed to destroy network for sandbox \"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.432567 containerd[1612]: time="2026-01-28T01:38:33.425425376Z" level=error msg="Failed to destroy network for sandbox \"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.429226 systemd[1]: run-netns-cni\x2d3b122fd5\x2dd397\x2d7ab6\x2de7ae\x2d3a719d46492c.mount: Deactivated successfully. Jan 28 01:38:33.438692 systemd[1]: run-netns-cni\x2d3b5a42ac\x2de453\x2dcbdf\x2d0378\x2d1d1539f0c48a.mount: Deactivated successfully. 
Jan 28 01:38:33.456783 containerd[1612]: time="2026-01-28T01:38:33.451150827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.458214 kubelet[2938]: E0128 01:38:33.458060 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.462039 kubelet[2938]: E0128 01:38:33.461710 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:38:33.462039 kubelet[2938]: E0128 01:38:33.461864 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:38:33.462039 kubelet[2938]: E0128 01:38:33.462003 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e65aa7bee0b3dc858ba1cd324b6be130e82da701afdd31f8c7a66ddb34ce907\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" Jan 28 01:38:33.480981 containerd[1612]: time="2026-01-28T01:38:33.475587848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.490903 kubelet[2938]: E0128 01:38:33.489555 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.490903 kubelet[2938]: E0128 01:38:33.489634 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:38:33.490903 kubelet[2938]: E0128 01:38:33.489668 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:38:33.491764 kubelet[2938]: E0128 01:38:33.489722 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f8465d7515a3e032c3efb842fcb0989e8be7562d632ddd39369c88086b83b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:38:33.523060 containerd[1612]: time="2026-01-28T01:38:33.512734609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.523060 containerd[1612]: time="2026-01-28T01:38:33.521239588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.523868 kubelet[2938]: E0128 01:38:33.513804 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.561365 kubelet[2938]: E0128 01:38:33.538750 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.561365 kubelet[2938]: E0128 01:38:33.538830 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:38:33.561365 kubelet[2938]: E0128 01:38:33.538858 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:38:33.561365 kubelet[2938]: E0128 01:38:33.513880 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:33.561746 kubelet[2938]: E0128 01:38:33.539763 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:38:33.561746 kubelet[2938]: E0128 01:38:33.540404 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d7fdc5eeed4280ceb37bb7ea2d461c5101127642c82621eb708ed5d191d43fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:38:33.565250 kubelet[2938]: E0128 01:38:33.538912 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"705bb7c412055d7ddc4c934e90df3adb224e4c593e8bda5c0f0b5b30dd561263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:38:33.581636 containerd[1612]: time="2026-01-28T01:38:33.581502754Z" level=error msg="Failed to destroy network for sandbox \"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.624249 containerd[1612]: time="2026-01-28T01:38:33.618362399Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.624997 kubelet[2938]: E0128 01:38:33.618633 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.624997 kubelet[2938]: E0128 01:38:33.620025 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:38:33.624997 kubelet[2938]: E0128 01:38:33.620064 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:38:33.615625 systemd[1]: 
run-netns-cni\x2ddd4f532d\x2ddd19\x2df373\x2d8ccd\x2d43039813c7a3.mount: Deactivated successfully. Jan 28 01:38:33.635618 kubelet[2938]: E0128 01:38:33.620121 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2210b683f628d9e5f34ecc9ec4e50151e58a51a268c5d995a12f29dbdd0070a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:38:33.685503 containerd[1612]: time="2026-01-28T01:38:33.678730730Z" level=error msg="Failed to destroy network for sandbox \"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.712429 systemd[1]: run-netns-cni\x2d73414520\x2d41af\x2d71de\x2dca9d\x2d69497fec0f9f.mount: Deactivated successfully. 
Jan 28 01:38:33.736390 containerd[1612]: time="2026-01-28T01:38:33.735804564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.737424 kubelet[2938]: E0128 01:38:33.737192 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.737578 kubelet[2938]: E0128 01:38:33.737492 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:38:33.737578 kubelet[2938]: E0128 01:38:33.737535 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:38:33.738239 kubelet[2938]: E0128 01:38:33.737595 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de48cce0e88a7049d318d6dd02490043279756f196484ae3e5a696720e4af78f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:38:33.786519 containerd[1612]: time="2026-01-28T01:38:33.786237476Z" level=error msg="Failed to destroy network for sandbox \"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.810783 systemd[1]: run-netns-cni\x2d434c4afd\x2d5e92\x2ddd59\x2d613a\x2d6e57031c6ce3.mount: Deactivated successfully. 
Jan 28 01:38:33.820629 containerd[1612]: time="2026-01-28T01:38:33.819750914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.829602 kubelet[2938]: E0128 01:38:33.827424 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:38:33.829602 kubelet[2938]: E0128 01:38:33.827559 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:38:33.829602 kubelet[2938]: E0128 01:38:33.827586 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6cpzt"
Jan 28 01:38:33.830410 kubelet[2938]: E0128 01:38:33.827640 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aacae1adf4242438b58e1630fd74827859fc984ed379ab713a27022585d90642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d"
Jan 28 01:38:44.381084 containerd[1612]: time="2026-01-28T01:38:44.380633199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}"
Jan 28 01:38:45.025662 containerd[1612]: time="2026-01-28T01:38:45.017664701Z" level=error msg="Failed to destroy network for sandbox \"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:45.025709 systemd[1]: run-netns-cni\x2de8846048\x2d273a\x2db182\x2d086a\x2dc8ab830448e2.mount: Deactivated successfully.
Jan 28 01:38:45.064777 containerd[1612]: time="2026-01-28T01:38:45.062223551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:45.072380 kubelet[2938]: E0128 01:38:45.069670 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:45.072380 kubelet[2938]: E0128 01:38:45.069757 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p"
Jan 28 01:38:45.072380 kubelet[2938]: E0128 01:38:45.069789 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p"
Jan 28 01:38:45.092080 kubelet[2938]: E0128 01:38:45.069844 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c21546365b4152dc7f578f46db703505a7f187a1c7a9020e52522ac2503fed72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673"
Jan 28 01:38:45.411924 kubelet[2938]: E0128 01:38:45.384906 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:45.516604 containerd[1612]: time="2026-01-28T01:38:45.514893194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}"
Jan 28 01:38:46.246797 containerd[1612]: time="2026-01-28T01:38:46.246732545Z" level=error msg="Failed to destroy network for sandbox \"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:46.264806 systemd[1]: run-netns-cni\x2da7e142fe\x2d28a4\x2d5781\x2dc4eb\x2d22bf62ecd9d3.mount: Deactivated successfully.
Jan 28 01:38:46.269354 containerd[1612]: time="2026-01-28T01:38:46.269081550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:46.271945 kubelet[2938]: E0128 01:38:46.270662 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:46.271945 kubelet[2938]: E0128 01:38:46.270750 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt"
Jan 28 01:38:46.271945 kubelet[2938]: E0128 01:38:46.270779 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt"
Jan 28 01:38:46.273751 kubelet[2938]: E0128 01:38:46.270839 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0d8dd2266958c75d7bad0fb56e1fc04e0e1d8ea865eeddbcd95d14ed8ac7190\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d"
Jan 28 01:38:46.371601 kubelet[2938]: E0128 01:38:46.371552 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:46.389238 containerd[1612]: time="2026-01-28T01:38:46.384588541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}"
Jan 28 01:38:46.389238 containerd[1612]: time="2026-01-28T01:38:46.387465559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}"
Jan 28 01:38:46.389238 containerd[1612]: time="2026-01-28T01:38:46.388156488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}"
Jan 28 01:38:47.379124 containerd[1612]: time="2026-01-28T01:38:47.377573670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:38:47.379124 containerd[1612]: time="2026-01-28T01:38:47.378402226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:38:47.388742 containerd[1612]: time="2026-01-28T01:38:47.388598049Z" level=error msg="Failed to destroy network for sandbox \"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.392920 systemd[1]: run-netns-cni\x2d3dad9ea4\x2d6d8e\x2dd090\x2d7d06\x2d329a5e68c2ae.mount: Deactivated successfully.
Jan 28 01:38:47.579780 containerd[1612]: time="2026-01-28T01:38:47.578912735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.600401 kubelet[2938]: E0128 01:38:47.596893 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.600401 kubelet[2938]: E0128 01:38:47.598177 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd"
Jan 28 01:38:47.600401 kubelet[2938]: E0128 01:38:47.598214 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd"
Jan 28 01:38:47.626352 kubelet[2938]: E0128 01:38:47.598372 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a16628c710a86ae2453443a9f1aaa1617521e62635eae730725bb381a69d244c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332"
Jan 28 01:38:47.779786 containerd[1612]: time="2026-01-28T01:38:47.769764138Z" level=error msg="Failed to destroy network for sandbox \"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.802363 systemd[1]: run-netns-cni\x2d8e194414\x2d2034\x2d60d6\x2da2a4\x2d005d9a11cb77.mount: Deactivated successfully.
Jan 28 01:38:47.847452 containerd[1612]: time="2026-01-28T01:38:47.847375554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.862152 kubelet[2938]: E0128 01:38:47.861852 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:47.862152 kubelet[2938]: E0128 01:38:47.862050 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs"
Jan 28 01:38:47.862152 kubelet[2938]: E0128 01:38:47.862092 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs"
Jan 28 01:38:47.862593 kubelet[2938]: E0128 01:38:47.862148 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"701fda824f05277b6e4b06bac10471808def982fc30cc939bfee9ac39ddc822c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lrhs" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784"
Jan 28 01:38:48.212845 containerd[1612]: time="2026-01-28T01:38:48.212733614Z" level=error msg="Failed to destroy network for sandbox \"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.224939 systemd[1]: run-netns-cni\x2d24cfc093\x2d18a2\x2d9355\x2d5230\x2d99bc1a36e631.mount: Deactivated successfully.
Jan 28 01:38:48.267413 containerd[1612]: time="2026-01-28T01:38:48.263956297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.270807 kubelet[2938]: E0128 01:38:48.266143 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.270807 kubelet[2938]: E0128 01:38:48.267203 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh"
Jan 28 01:38:48.270807 kubelet[2938]: E0128 01:38:48.267354 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh"
Jan 28 01:38:48.281554 kubelet[2938]: E0128 01:38:48.267523 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd6b30a0d419b627b005d787dabb7ca0a82c84e2a42baaed7d08be87c730665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11"
Jan 28 01:38:48.387421 containerd[1612]: time="2026-01-28T01:38:48.386595429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}"
Jan 28 01:38:48.546946 containerd[1612]: time="2026-01-28T01:38:48.544772685Z" level=error msg="Failed to destroy network for sandbox \"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.560737 systemd[1]: run-netns-cni\x2dfc8ab3d2\x2d8936\x2d529b\x2d21cb\x2db66a8c2c9915.mount: Deactivated successfully.
Jan 28 01:38:48.582162 containerd[1612]: time="2026-01-28T01:38:48.576780151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.595458 kubelet[2938]: E0128 01:38:48.591540 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.595458 kubelet[2938]: E0128 01:38:48.591621 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr"
Jan 28 01:38:48.595458 kubelet[2938]: E0128 01:38:48.591651 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr"
Jan 28 01:38:48.606740 kubelet[2938]: E0128 01:38:48.592138 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e015007d2b344af150b95a9813993614ad62802bfe944b0d99f606b894a8fcbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144"
Jan 28 01:38:48.748770 containerd[1612]: time="2026-01-28T01:38:48.748714454Z" level=error msg="Failed to destroy network for sandbox \"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.773082 systemd[1]: run-netns-cni\x2d3f1b9913\x2df35b\x2d977b\x2d4e0c\x2d17d123e5ec16.mount: Deactivated successfully.
Jan 28 01:38:48.795045 containerd[1612]: time="2026-01-28T01:38:48.794923675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.797469 kubelet[2938]: E0128 01:38:48.796724 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.797469 kubelet[2938]: E0128 01:38:48.797055 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt"
Jan 28 01:38:48.799754 kubelet[2938]: E0128 01:38:48.798653 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt"
Jan 28 01:38:48.799754 kubelet[2938]: E0128 01:38:48.798960 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"189fec12fa718995b6243fd3b069715d232dacecda26019bdd0b5e7fd35c7f34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163"
Jan 28 01:38:48.848498 containerd[1612]: time="2026-01-28T01:38:48.841181034Z" level=error msg="Failed to destroy network for sandbox \"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.874902 systemd[1]: run-netns-cni\x2d0e78c667\x2deef9\x2d85bc\x2d18b9\x2dcae3a6ce009c.mount: Deactivated successfully.
Jan 28 01:38:48.935859 containerd[1612]: time="2026-01-28T01:38:48.874462750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.937860 kubelet[2938]: E0128 01:38:48.937459 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:38:48.937860 kubelet[2938]: E0128 01:38:48.937545 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v"
Jan 28 01:38:48.937860 kubelet[2938]: E0128 01:38:48.937578 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v"
Jan 28 01:38:48.938143 kubelet[2938]: E0128 01:38:48.937640 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc99c17f02f333c5c8409498f6cab0a277324a8d054eb8913e111efd09ca8a42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef"
Jan 28 01:38:55.385658 kubelet[2938]: E0128 01:38:55.385428 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:58.492991 kubelet[2938]: E0128 01:38:58.490854 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:59.997952 containerd[1612]: time="2026-01-28T01:38:59.977566685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}"
Jan 28 01:39:04.497932 kubelet[2938]: E0128 01:39:04.496830 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.571s"
Jan 28 01:39:04.512824 kubelet[2938]: E0128 01:39:04.511876 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:39:04.518167 kubelet[2938]: E0128 01:39:04.517979 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:39:04.518984 containerd[1612]: time="2026-01-28T01:39:04.518863002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}"
Jan 28 01:39:04.519646 containerd[1612]: time="2026-01-28T01:39:04.519131454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}"
Jan 28 01:39:04.813861 containerd[1612]: time="2026-01-28T01:39:04.806979529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}"
Jan 28 01:39:04.930469 containerd[1612]: time="2026-01-28T01:39:04.920521978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}"
Jan 28 01:39:05.175685 containerd[1612]: time="2026-01-28T01:39:05.171413587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:39:05.176180 containerd[1612]: time="2026-01-28T01:39:05.176126215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}"
Jan 28 01:39:05.182431 containerd[1612]: time="2026-01-28T01:39:05.177970219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:39:06.496207 kubelet[2938]: E0128 01:39:06.469694 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.101s"
Jan 28 01:39:08.429623 kubelet[2938]: E0128 01:39:08.429580 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.922s"
Jan 28 01:39:12.877822 kubelet[2938]: E0128 01:39:12.870983 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.432s"
Jan 28 01:39:14.067843 containerd[1612]: time="2026-01-28T01:39:14.067527327Z" level=error msg="Failed to destroy network for sandbox \"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:14.080921 systemd[1]: run-netns-cni\x2ddd9004fc\x2d31cf\x2d4250\x2db9e9\x2d730a2afb541d.mount: Deactivated successfully.
Jan 28 01:39:14.195723 containerd[1612]: time="2026-01-28T01:39:14.188664599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:14.204940 kubelet[2938]: E0128 01:39:14.204886 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:14.213699 kubelet[2938]: E0128 01:39:14.205752 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt"
Jan 28 01:39:14.213699 kubelet[2938]: E0128 01:39:14.205794 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt"
Jan 28 01:39:14.213699 kubelet[2938]: E0128 01:39:14.205906 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e5d276e197c228dac53fafc33a7805ccc7090a5ce1196f6054b11e6a460da70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d"
Jan 28 01:39:14.956602 containerd[1612]: time="2026-01-28T01:39:14.823041694Z" level=error msg="Failed to destroy network for sandbox \"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:15.002795 systemd[1]: run-netns-cni\x2d832cff48\x2de691\x2dc9ca\x2ddb71\x2d94bf6e41274a.mount: Deactivated successfully.
Jan 28 01:39:15.233663 containerd[1612]: time="2026-01-28T01:39:15.232874674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:15.240565 kubelet[2938]: E0128 01:39:15.233607 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:15.240565 kubelet[2938]: E0128 01:39:15.233737 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt"
Jan 28 01:39:15.240565 kubelet[2938]: E0128 01:39:15.233848 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt"
Jan 28 01:39:15.247682 kubelet[2938]: E0128 01:39:15.233998 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6251fd12e88a659fc2f9eacc143cc14bcd5853041abb6ee739069d1904b69cf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163"
Jan 28 01:39:16.196611 containerd[1612]: time="2026-01-28T01:39:16.111589450Z" level=error msg="Failed to destroy network for sandbox \"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:39:16.208010 systemd[1]: run-netns-cni\x2d1f9bf147\x2dc605\x2d2466\x2daf9c\x2dd29c8b306fb9.mount: Deactivated successfully.
Jan 28 01:39:16.409828 containerd[1612]: time="2026-01-28T01:39:16.407951296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:16.470658 kubelet[2938]: E0128 01:39:16.432808 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:16.470658 kubelet[2938]: E0128 01:39:16.432888 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:16.470658 kubelet[2938]: E0128 01:39:16.432918 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:16.472920 kubelet[2938]: E0128 01:39:16.432976 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c906a192f21d22f0d8261885d28043e2d82c5e240692442aee567e61ce709763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" Jan 28 01:39:16.591807 containerd[1612]: time="2026-01-28T01:39:16.591227962Z" level=error msg="Failed to destroy network for sandbox \"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:16.685017 systemd[1]: run-netns-cni\x2d064020d2\x2d45dc\x2d8809\x2d6dd4\x2dad24f6fa5260.mount: Deactivated successfully. Jan 28 01:39:16.712031 containerd[1612]: time="2026-01-28T01:39:16.711974949Z" level=error msg="Failed to destroy network for sandbox \"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:16.756524 systemd[1]: run-netns-cni\x2d6e32943e\x2d9668\x2d83c0\x2d8738\x2d67be4d6e35be.mount: Deactivated successfully. 
Jan 28 01:39:16.863559 containerd[1612]: time="2026-01-28T01:39:16.798695486Z" level=error msg="Failed to destroy network for sandbox \"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:16.837924 systemd[1]: run-netns-cni\x2d4084bf99\x2dbec3\x2d8755\x2d7117\x2da6a6e0f4de15.mount: Deactivated successfully. Jan 28 01:39:17.399529 containerd[1612]: time="2026-01-28T01:39:17.396051067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.410356 kubelet[2938]: E0128 01:39:17.409429 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.410356 kubelet[2938]: E0128 01:39:17.409509 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:17.410356 kubelet[2938]: E0128 01:39:17.409544 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:17.410666 kubelet[2938]: E0128 01:39:17.409609 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd93c85f7d2240713bbdac3bb1cccc12dd369dd1153c16986a3d8743026b105a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:39:17.684842 containerd[1612]: time="2026-01-28T01:39:17.429737603Z" level=info msg="container event discarded" container=a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df type=CONTAINER_CREATED_EVENT Jan 28 01:39:17.725612 containerd[1612]: time="2026-01-28T01:39:17.519650468Z" level=error msg="Failed to destroy network for sandbox \"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 28 01:39:17.766618 systemd[1]: run-netns-cni\x2d56aba4d8\x2d22cb\x2d6779\x2dc6a5\x2d3870e5179b8a.mount: Deactivated successfully. Jan 28 01:39:17.777880 containerd[1612]: time="2026-01-28T01:39:17.521404481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.781161 containerd[1612]: time="2026-01-28T01:39:17.528477731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.802836 kubelet[2938]: E0128 01:39:17.786606 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.802836 kubelet[2938]: E0128 01:39:17.786682 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:17.802836 kubelet[2938]: E0128 01:39:17.786712 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:17.802836 kubelet[2938]: E0128 01:39:17.787684 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.808549 kubelet[2938]: E0128 01:39:17.787717 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:17.808549 kubelet[2938]: E0128 01:39:17.787741 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:17.808549 kubelet[2938]: E0128 01:39:17.787790 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2f877312465f7925fd713030aa25329e87ba23d652f443dcd355e3551954f7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:39:17.808862 kubelet[2938]: E0128 01:39:17.788463 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e41cd14ac7fb627077f15b5f33addc4b841df9d4b506b7e653b44853bc274897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lrhs" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784" Jan 28 01:39:17.864630 containerd[1612]: 
time="2026-01-28T01:39:17.864563554Z" level=error msg="Failed to destroy network for sandbox \"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.865147 containerd[1612]: time="2026-01-28T01:39:17.865039231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.878434 kubelet[2938]: E0128 01:39:17.869733 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:17.878434 kubelet[2938]: E0128 01:39:17.869919 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:17.878434 kubelet[2938]: E0128 01:39:17.869961 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:17.878614 kubelet[2938]: E0128 01:39:17.870026 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa0e2688088514ea1583ed4b0238f2d663c9b0a1e4287968e513940c97da0404\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:39:17.897447 systemd[1]: run-netns-cni\x2d296ba346\x2dc9d2\x2d87e5\x2d0aea\x2dceff1a638b51.mount: Deactivated successfully. 
Jan 28 01:39:18.025780 containerd[1612]: time="2026-01-28T01:39:17.982581728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:18.038199 kubelet[2938]: E0128 01:39:18.038150 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:18.038570 kubelet[2938]: E0128 01:39:18.038539 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:39:18.038694 kubelet[2938]: E0128 01:39:18.038668 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" 
Jan 28 01:39:18.046546 kubelet[2938]: E0128 01:39:18.038822 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b24ad09d984997095e413d554b0a0698fc30c51d20985468af318a50a339a536\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:39:18.248038 containerd[1612]: time="2026-01-28T01:39:18.232405310Z" level=info msg="container event discarded" container=a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df type=CONTAINER_STARTED_EVENT Jan 28 01:39:18.297763 containerd[1612]: time="2026-01-28T01:39:18.297506283Z" level=info msg="container event discarded" container=646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1 type=CONTAINER_CREATED_EVENT Jan 28 01:39:18.312357 containerd[1612]: time="2026-01-28T01:39:18.298051269Z" level=info msg="container event discarded" container=646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1 type=CONTAINER_STARTED_EVENT Jan 28 01:39:18.312642 containerd[1612]: time="2026-01-28T01:39:18.312606241Z" level=info msg="container event discarded" container=445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384 type=CONTAINER_CREATED_EVENT Jan 28 01:39:18.312746 containerd[1612]: time="2026-01-28T01:39:18.312725012Z" level=info msg="container event discarded" container=445b97fe862586ea73433f9721f78906e725171e7b89602fa2c525dea695a384 type=CONTAINER_STARTED_EVENT Jan 28 01:39:18.312921 containerd[1612]: 
time="2026-01-28T01:39:18.312899328Z" level=info msg="container event discarded" container=7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6 type=CONTAINER_CREATED_EVENT Jan 28 01:39:18.313001 containerd[1612]: time="2026-01-28T01:39:18.312982633Z" level=info msg="container event discarded" container=bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83 type=CONTAINER_CREATED_EVENT Jan 28 01:39:18.419173 containerd[1612]: time="2026-01-28T01:39:18.418478845Z" level=info msg="container event discarded" container=7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9 type=CONTAINER_CREATED_EVENT Jan 28 01:39:19.964357 containerd[1612]: time="2026-01-28T01:39:19.964040537Z" level=info msg="container event discarded" container=bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83 type=CONTAINER_STARTED_EVENT Jan 28 01:39:19.964357 containerd[1612]: time="2026-01-28T01:39:19.964350586Z" level=info msg="container event discarded" container=7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6 type=CONTAINER_STARTED_EVENT Jan 28 01:39:20.039123 containerd[1612]: time="2026-01-28T01:39:20.028877607Z" level=info msg="container event discarded" container=7c0919d9526146648a8e41a1693364d2f00f55c67b4c09e553e1095e49ea45b9 type=CONTAINER_STARTED_EVENT Jan 28 01:39:20.247742 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802). Jan 28 01:39:20.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:20.349235 kernel: audit: type=1130 audit(1769564360.254:617): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:21.223000 audit[4721]: USER_ACCT pid=4721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.233903 sshd[4721]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:21.243969 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:39:21.235000 audit[4721]: CRED_ACQ pid=4721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.369250 kernel: audit: type=1101 audit(1769564361.223:618): pid=4721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.369500 kernel: audit: type=1103 audit(1769564361.235:619): pid=4721 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.369622 kernel: audit: type=1006 audit(1769564361.235:620): pid=4721 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 28 01:39:21.372938 systemd-logind[1594]: New session 9 of user core. 
Jan 28 01:39:21.433241 kernel: audit: type=1300 audit(1769564361.235:620): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7f13e8c0 a2=3 a3=0 items=0 ppid=1 pid=4721 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:21.235000 audit[4721]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7f13e8c0 a2=3 a3=0 items=0 ppid=1 pid=4721 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:21.235000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:21.529831 kernel: audit: type=1327 audit(1769564361.235:620): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:21.531203 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 01:39:21.610000 audit[4721]: USER_START pid=4721 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.743978 kernel: audit: type=1105 audit(1769564361.610:621): pid=4721 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.744936 kernel: audit: type=1103 audit(1769564361.674:622): pid=4725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:21.674000 audit[4725]: CRED_ACQ pid=4725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:23.556509 sshd[4725]: Connection closed by 10.0.0.1 port 35802 Jan 28 01:39:23.558417 sshd-session[4721]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:23.661977 kernel: audit: type=1106 audit(1769564363.563:623): pid=4721 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:23.729034 kernel: audit: type=1104 audit(1769564363.564:624): pid=4721 uid=0 auid=500 
ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:23.563000 audit[4721]: USER_END pid=4721 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:23.564000 audit[4721]: CRED_DISP pid=4721 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:23.588868 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:35802.service: Deactivated successfully. Jan 28 01:39:23.591612 systemd-logind[1594]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:39:23.672771 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:39:23.725695 systemd-logind[1594]: Removed session 9. Jan 28 01:39:23.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:24.391735 kubelet[2938]: E0128 01:39:24.388762 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:27.405453 containerd[1612]: time="2026-01-28T01:39:27.392016040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:27.418612 kubelet[2938]: E0128 01:39:27.405080 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:27.439031 containerd[1612]: time="2026-01-28T01:39:27.436548501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}" Jan 28 01:39:27.948525 containerd[1612]: time="2026-01-28T01:39:27.944974986Z" level=error msg="Failed to destroy network for sandbox \"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:27.976042 systemd[1]: run-netns-cni\x2d70eb158a\x2d42fd\x2d832d\x2d7878\x2de63a5cdd5b5b.mount: Deactivated successfully. 
Jan 28 01:39:27.985420 containerd[1612]: time="2026-01-28T01:39:27.982452542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:27.985659 kubelet[2938]: E0128 01:39:27.982782 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:27.985659 kubelet[2938]: E0128 01:39:27.982873 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:27.985659 kubelet[2938]: E0128 01:39:27.982911 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:27.985881 kubelet[2938]: E0128 01:39:27.982972 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e13f9fac82002ac262158bc35f402b866ccc4ec62e5d0f65fabc85c9ed087484\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" Jan 28 01:39:28.264827 containerd[1612]: time="2026-01-28T01:39:28.262750928Z" level=error msg="Failed to destroy network for sandbox \"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:28.293027 systemd[1]: run-netns-cni\x2d3513c3ae\x2d46ba\x2dc5f0\x2d2d19\x2d982a8e7be4c4.mount: Deactivated successfully. 
Jan 28 01:39:28.307707 containerd[1612]: time="2026-01-28T01:39:28.306787424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:28.307922 kubelet[2938]: E0128 01:39:28.306958 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:28.307922 kubelet[2938]: E0128 01:39:28.307024 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:28.307922 kubelet[2938]: E0128 01:39:28.307060 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:28.308153 kubelet[2938]: E0128 01:39:28.307190 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf946c1162404bb6ca8503f4f51fca9d3554cab3b15d7e546d70291eb87ed8d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d" Jan 28 01:39:28.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:48436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:28.607851 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:48436.service - OpenSSH per-connection server daemon (10.0.0.1:48436). Jan 28 01:39:28.649978 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:39:28.682935 kernel: audit: type=1130 audit(1769564368.607:626): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:48436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:29.015000 audit[4804]: USER_ACCT pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.030467 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:39:29.036583 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 48436 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:29.078816 kernel: audit: type=1101 audit(1769564369.015:627): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.081096 kernel: audit: type=1103 audit(1769564369.026:628): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.026000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.026000 audit[4804]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe48ca2d00 a2=3 a3=0 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:29.156151 kernel: audit: type=1006 audit(1769564369.026:629): pid=4804 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=10 res=1 Jan 28 01:39:29.162067 kernel: audit: type=1300 audit(1769564369.026:629): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe48ca2d00 a2=3 a3=0 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:29.162207 kernel: audit: type=1327 audit(1769564369.026:629): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:29.026000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:29.195720 systemd-logind[1594]: New session 10 of user core. Jan 28 01:39:29.223897 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:39:29.271000 audit[4804]: USER_START pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.328921 kernel: audit: type=1105 audit(1769564369.271:630): pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.329086 kernel: audit: type=1103 audit(1769564369.296:631): pid=4808 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.296000 audit[4808]: CRED_ACQ pid=4808 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:29.398213 containerd[1612]: time="2026-01-28T01:39:29.398036993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:29.414548 containerd[1612]: time="2026-01-28T01:39:29.412984344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:30.223052 sshd[4808]: Connection closed by 10.0.0.1 port 48436 Jan 28 01:39:30.222356 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:30.235000 audit[4804]: USER_END pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:30.303543 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:48436.service: Deactivated successfully. Jan 28 01:39:30.316101 kernel: audit: type=1106 audit(1769564370.235:632): pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:30.331022 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 28 01:39:30.235000 audit[4804]: CRED_DISP pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:30.380555 systemd-logind[1594]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:39:30.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:48436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:30.382346 kernel: audit: type=1104 audit(1769564370.235:633): pid=4804 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:30.391051 systemd-logind[1594]: Removed session 10. Jan 28 01:39:30.472796 containerd[1612]: time="2026-01-28T01:39:30.471776716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:30.634680 containerd[1612]: time="2026-01-28T01:39:30.633613832Z" level=error msg="Failed to destroy network for sandbox \"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:30.671647 systemd[1]: run-netns-cni\x2d7608be54\x2dc78d\x2db850\x2d468d\x2d1509efb276c5.mount: Deactivated successfully. 
Jan 28 01:39:30.714563 containerd[1612]: time="2026-01-28T01:39:30.714503839Z" level=error msg="Failed to destroy network for sandbox \"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:30.737361 systemd[1]: run-netns-cni\x2d4892f44a\x2d4a0b\x2d5189\x2de0a8\x2d6717349ab578.mount: Deactivated successfully. Jan 28 01:39:30.814762 containerd[1612]: time="2026-01-28T01:39:30.812028885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:30.909048 kubelet[2938]: E0128 01:39:30.896742 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:30.909048 kubelet[2938]: E0128 01:39:30.908020 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:30.909048 kubelet[2938]: E0128 01:39:30.908060 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:30.910090 kubelet[2938]: E0128 01:39:30.908328 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8b42db310c7e20d3af908368542d099a35b5f1a0732a3adc6762d8c4107da43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:39:30.917183 containerd[1612]: time="2026-01-28T01:39:30.916812243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:39:30.920633 kubelet[2938]: E0128 01:39:30.920374 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:30.920633 kubelet[2938]: E0128 01:39:30.920481 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:30.920633 kubelet[2938]: E0128 01:39:30.920514 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:30.926149 kubelet[2938]: E0128 01:39:30.920572 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ee70523fe50412cd9ebe01b8228b5a76425c01d5427992d9b38a23a55ddc5bae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:39:31.371775 containerd[1612]: time="2026-01-28T01:39:31.366522403Z" level=error msg="Failed to destroy network for sandbox \"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:31.380475 systemd[1]: run-netns-cni\x2d4ef52520\x2d8186\x2dad4c\x2d8035\x2dd6c2e768ab7b.mount: Deactivated successfully. Jan 28 01:39:31.394072 containerd[1612]: time="2026-01-28T01:39:31.394010342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:31.456342 containerd[1612]: time="2026-01-28T01:39:31.455175346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:31.463172 kubelet[2938]: E0128 01:39:31.459571 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:31.463172 kubelet[2938]: E0128 01:39:31.459659 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:39:31.463172 kubelet[2938]: E0128 01:39:31.459688 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:39:31.463519 kubelet[2938]: E0128 01:39:31.459743 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f359fee8553a374e9774df622241e97af0101708fd47eccc6dc372dc52300d37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:39:31.996888 containerd[1612]: 
time="2026-01-28T01:39:31.996833547Z" level=error msg="Failed to destroy network for sandbox \"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:32.101881 systemd[1]: run-netns-cni\x2d5cf38585\x2d4bed\x2d8064\x2d474f\x2d897e157d34b7.mount: Deactivated successfully. Jan 28 01:39:32.115335 containerd[1612]: time="2026-01-28T01:39:32.113085181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:32.123405 kubelet[2938]: E0128 01:39:32.121455 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:32.123405 kubelet[2938]: E0128 01:39:32.121540 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:32.123405 kubelet[2938]: E0128 01:39:32.121569 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:32.123971 kubelet[2938]: E0128 01:39:32.121797 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"464d86e8e880985e706d4063dac4c97680d7547f8825ae413321b7c2544828de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:39:32.378231 kubelet[2938]: E0128 01:39:32.373229 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:32.378231 kubelet[2938]: E0128 01:39:32.373883 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:32.378583 containerd[1612]: time="2026-01-28T01:39:32.375389000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}" Jan 28 01:39:33.236365 containerd[1612]: time="2026-01-28T01:39:33.234559681Z" level=error msg="Failed to destroy network for sandbox \"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:33.283513 systemd[1]: run-netns-cni\x2db79c2e70\x2d24aa\x2dbc05\x2de756\x2d84c5a07d9609.mount: Deactivated successfully. Jan 28 01:39:33.311836 containerd[1612]: time="2026-01-28T01:39:33.309533859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:33.312950 kubelet[2938]: E0128 01:39:33.312657 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:33.312950 kubelet[2938]: E0128 01:39:33.312736 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:33.312950 kubelet[2938]: E0128 01:39:33.312773 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:33.313632 kubelet[2938]: E0128 01:39:33.312831 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dd78a0c474dd3b2b7b5395c9fe227967d58d321141813c3d62e5281464b84ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lrhs" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784" Jan 28 01:39:33.380380 containerd[1612]: time="2026-01-28T01:39:33.378590126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:34.410367 containerd[1612]: time="2026-01-28T01:39:34.405801104Z" level=error msg="Failed to destroy network for sandbox \"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:34.414738 systemd[1]: run-netns-cni\x2d209cf678\x2d9222\x2db502\x2da125\x2d07a04f27af29.mount: Deactivated successfully. Jan 28 01:39:34.523759 containerd[1612]: time="2026-01-28T01:39:34.522478901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:34.524022 kubelet[2938]: E0128 01:39:34.523036 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:34.532583 kubelet[2938]: E0128 01:39:34.523110 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:34.532583 kubelet[2938]: E0128 01:39:34.525728 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:34.547685 kubelet[2938]: E0128 01:39:34.538503 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5964d7832689bfd7e7a59268c26113ee55d17bf9aefb9edbde0e48a7e97e25f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:39:35.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:41468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:35.314825 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:41468.service - OpenSSH per-connection server daemon (10.0.0.1:41468). Jan 28 01:39:35.338828 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:39:35.339677 kernel: audit: type=1130 audit(1769564375.317:635): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:41468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:36.231000 audit[5008]: USER_ACCT pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.233865 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 41468 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:36.283772 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:39:36.307717 kernel: audit: type=1101 audit(1769564376.231:636): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.279000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.352108 systemd-logind[1594]: New session 11 of user core. 
Jan 28 01:39:36.378825 kernel: audit: type=1103 audit(1769564376.279:637): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.383373 kernel: audit: type=1006 audit(1769564376.279:638): pid=5008 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 01:39:36.383436 kernel: audit: type=1300 audit(1769564376.279:638): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe74b4ec10 a2=3 a3=0 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:36.279000 audit[5008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe74b4ec10 a2=3 a3=0 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:36.424555 kernel: audit: type=1327 audit(1769564376.279:638): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:36.279000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:36.477610 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 01:39:36.531000 audit[5008]: USER_START pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.557000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.630809 kernel: audit: type=1105 audit(1769564376.531:639): pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:36.630926 kernel: audit: type=1103 audit(1769564376.557:640): pid=5013 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:37.658099 sshd[5013]: Connection closed by 10.0.0.1 port 41468 Jan 28 01:39:37.667551 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:37.687000 audit[5008]: USER_END pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:37.708759 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:41468.service: Deactivated successfully. 
Jan 28 01:39:37.725244 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:39:37.796538 kernel: audit: type=1106 audit(1769564377.687:641): pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:37.811481 kernel: audit: type=1104 audit(1769564377.687:642): pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:37.687000 audit[5008]: CRED_DISP pid=5008 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:37.809699 systemd-logind[1594]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:39:37.825450 systemd-logind[1594]: Removed session 11. Jan 28 01:39:37.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:41468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:39.376937 containerd[1612]: time="2026-01-28T01:39:39.372250216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:39.899752 containerd[1612]: time="2026-01-28T01:39:39.899048062Z" level=error msg="Failed to destroy network for sandbox \"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:39.907790 systemd[1]: run-netns-cni\x2d82ac7488\x2de4dc\x2d2202\x2d5950\x2dd6faf385bd6d.mount: Deactivated successfully. Jan 28 01:39:39.933466 containerd[1612]: time="2026-01-28T01:39:39.932364966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:39.935212 kubelet[2938]: E0128 01:39:39.933117 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:39.935827 kubelet[2938]: E0128 01:39:39.935350 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:39.935827 kubelet[2938]: E0128 01:39:39.935389 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:39.935827 kubelet[2938]: E0128 01:39:39.935452 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cad59e47a62fd59e8a2ee95b0f73f6f14c13d5a9fa0f5d906d148f0472574dad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" Jan 28 01:39:40.374209 kubelet[2938]: E0128 01:39:40.372695 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:40.400541 containerd[1612]: time="2026-01-28T01:39:40.397862315Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}" Jan 28 01:39:40.814344 containerd[1612]: time="2026-01-28T01:39:40.809967919Z" level=error msg="Failed to destroy network for sandbox \"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:40.826777 systemd[1]: run-netns-cni\x2df6672282\x2dc512\x2d1b55\x2da0be\x2df793c517b8f6.mount: Deactivated successfully. Jan 28 01:39:40.840994 containerd[1612]: time="2026-01-28T01:39:40.839804457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:40.841743 kubelet[2938]: E0128 01:39:40.840238 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:40.841743 kubelet[2938]: E0128 01:39:40.840432 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:40.841743 kubelet[2938]: E0128 01:39:40.840468 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:40.841881 kubelet[2938]: E0128 01:39:40.840526 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9125318373f350c70b008aab2ebf007b658d62bf6d011cfaad8788f97c82759f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d" Jan 28 01:39:41.809093 containerd[1612]: time="2026-01-28T01:39:41.808098957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:41.809093 containerd[1612]: time="2026-01-28T01:39:41.808398462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:42.774139 
systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:44908.service - OpenSSH per-connection server daemon (10.0.0.1:44908). Jan 28 01:39:42.802728 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:39:42.802950 kernel: audit: type=1130 audit(1769564382.778:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.88:22-10.0.0.1:44908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:42.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.88:22-10.0.0.1:44908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:43.058724 containerd[1612]: time="2026-01-28T01:39:43.058443009Z" level=error msg="Failed to destroy network for sandbox \"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.069414 systemd[1]: run-netns-cni\x2d27fe3771\x2d51a3\x2d6709\x2d2bc7\x2d958b49cc7ced.mount: Deactivated successfully. 
Jan 28 01:39:43.224605 containerd[1612]: time="2026-01-28T01:39:43.222714834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.224896 kubelet[2938]: E0128 01:39:43.223115 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.224896 kubelet[2938]: E0128 01:39:43.223346 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:43.224896 kubelet[2938]: E0128 01:39:43.223382 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:43.225679 kubelet[2938]: E0128 01:39:43.223524 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"081062ede6e2ff836462971c8f18d6595eb625ad6b74592df694b0969d5a6202\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:39:43.288573 containerd[1612]: time="2026-01-28T01:39:43.288509943Z" level=error msg="Failed to destroy network for sandbox \"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.291991 systemd[1]: run-netns-cni\x2d01737732\x2d8399\x2d1e59\x2df922\x2da69fa2b583b0.mount: Deactivated successfully. 
Jan 28 01:39:43.349099 containerd[1612]: time="2026-01-28T01:39:43.347998523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.350069 kubelet[2938]: E0128 01:39:43.349648 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:43.350069 kubelet[2938]: E0128 01:39:43.349726 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:43.350069 kubelet[2938]: E0128 01:39:43.349755 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:43.350451 kubelet[2938]: E0128 01:39:43.349806 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a56e2b7ec14fcab840def45e8a296e05917454a9135ea1fc973f12d2dbb7f54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:39:43.407335 containerd[1612]: time="2026-01-28T01:39:43.406978275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:43.407865 containerd[1612]: time="2026-01-28T01:39:43.407795950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:43.462000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.470984 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 44908 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:43.475956 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 
01:39:43.523512 kernel: audit: type=1101 audit(1769564383.462:645): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.523638 kernel: audit: type=1103 audit(1769564383.470:646): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.470000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.573932 systemd-logind[1594]: New session 12 of user core. Jan 28 01:39:43.584364 kernel: audit: type=1006 audit(1769564383.474:647): pid=5140 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 01:39:43.474000 audit[5140]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea622d960 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:43.642880 kernel: audit: type=1300 audit(1769564383.474:647): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea622d960 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:43.643018 kernel: audit: type=1327 audit(1769564383.474:647): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:43.474000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:43.682741 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:39:43.712000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.800354 kernel: audit: type=1105 audit(1769564383.712:648): pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.800482 kernel: audit: type=1103 audit(1769564383.728:649): pid=5187 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:43.728000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:44.271481 containerd[1612]: time="2026-01-28T01:39:44.271421253Z" level=error msg="Failed to destroy network for sandbox \"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.295815 
systemd[1]: run-netns-cni\x2df5ba4ba8\x2dc1c1\x2d133f\x2d8520\x2d8d87e5647aeb.mount: Deactivated successfully. Jan 28 01:39:44.334502 containerd[1612]: time="2026-01-28T01:39:44.334433760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.338477 kubelet[2938]: E0128 01:39:44.336466 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.338477 kubelet[2938]: E0128 01:39:44.336563 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:44.338477 kubelet[2938]: E0128 01:39:44.336597 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:44.340556 kubelet[2938]: E0128 01:39:44.336710 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ecf460fcf338f8b23c08d774be0ef3dab265150ecbedc7a3dff1e97eca660a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:39:44.483392 containerd[1612]: time="2026-01-28T01:39:44.479662398Z" level=error msg="Failed to destroy network for sandbox \"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.497217 systemd[1]: run-netns-cni\x2da0ae2ee0\x2dd914\x2d652e\x2d9cf1\x2d69a80c09fa98.mount: Deactivated successfully. 
Jan 28 01:39:44.522391 containerd[1612]: time="2026-01-28T01:39:44.521880512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.526618 kubelet[2938]: E0128 01:39:44.522900 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:44.526873 kubelet[2938]: E0128 01:39:44.526839 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" Jan 28 01:39:44.527008 kubelet[2938]: E0128 01:39:44.526978 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4245v" 
Jan 28 01:39:44.527509 kubelet[2938]: E0128 01:39:44.527467 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d73a8780c894ee192843db8ee8d6698e139e27bf0c6f858fd4a901431b4ae66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:39:44.753350 sshd[5187]: Connection closed by 10.0.0.1 port 44908 Jan 28 01:39:44.755681 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:44.761000 audit[5140]: USER_END pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:44.816314 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:44908.service: Deactivated successfully. Jan 28 01:39:44.832375 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 28 01:39:44.857236 kernel: audit: type=1106 audit(1769564384.761:650): pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:44.857434 kernel: audit: type=1104 audit(1769564384.791:651): pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:44.791000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:44.871437 systemd-logind[1594]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:39:44.885628 systemd-logind[1594]: Removed session 12. Jan 28 01:39:44.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.88:22-10.0.0.1:44908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:46.381660 containerd[1612]: time="2026-01-28T01:39:46.373121701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:47.130344 containerd[1612]: time="2026-01-28T01:39:47.122470827Z" level=error msg="Failed to destroy network for sandbox \"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:47.140839 systemd[1]: run-netns-cni\x2da7e89199\x2daf07\x2d6528\x2d4397\x2db884e498d999.mount: Deactivated successfully. Jan 28 01:39:47.142913 containerd[1612]: time="2026-01-28T01:39:47.142860125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:47.143724 kubelet[2938]: E0128 01:39:47.143675 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:47.149352 kubelet[2938]: E0128 01:39:47.144156 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:47.149352 kubelet[2938]: E0128 01:39:47.147605 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s28bh" Jan 28 01:39:47.149352 kubelet[2938]: E0128 01:39:47.147720 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"860bc0dfc48f79322e77a5bdfb6910a13e7b51c4720b26fc203fcb3d1ff698cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:39:47.371473 kubelet[2938]: E0128 01:39:47.370789 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:47.379407 containerd[1612]: time="2026-01-28T01:39:47.375626448Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}" Jan 28 01:39:50.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.88:22-10.0.0.1:44920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:50.856769 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:39:50.857122 kernel: audit: type=1130 audit(1769564390.785:653): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.88:22-10.0.0.1:44920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:50.775929 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:44920.service - OpenSSH per-connection server daemon (10.0.0.1:44920). Jan 28 01:39:51.562000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:51.657169 kernel: audit: type=1101 audit(1769564391.562:654): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:51.672159 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 44920 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:51.673000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:51.719823 
sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:39:51.759999 kernel: audit: type=1103 audit(1769564391.673:655): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:51.800894 kernel: audit: type=1006 audit(1769564391.674:656): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 01:39:51.803991 kernel: audit: type=1300 audit(1769564391.674:656): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd74595ba0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:51.804243 kernel: audit: type=1327 audit(1769564391.674:656): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:51.674000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd74595ba0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:51.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:51.982517 systemd-logind[1594]: New session 13 of user core. Jan 28 01:39:52.005591 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 01:39:52.033000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:52.090953 kernel: audit: type=1105 audit(1769564392.033:657): pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:52.128000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:52.152092 containerd[1612]: time="2026-01-28T01:39:52.152018476Z" level=error msg="Failed to destroy network for sandbox \"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:52.193559 kernel: audit: type=1103 audit(1769564392.128:658): pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:52.214621 systemd[1]: run-netns-cni\x2d4e34048e\x2d67f5\x2d1b7a\x2dbf83\x2df35a70663251.mount: Deactivated successfully. 
Jan 28 01:39:52.325687 containerd[1612]: time="2026-01-28T01:39:52.323543356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:52.333849 kubelet[2938]: E0128 01:39:52.333714 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:52.336623 kubelet[2938]: E0128 01:39:52.334356 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:52.336623 kubelet[2938]: E0128 01:39:52.334398 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-2lrhs" Jan 28 01:39:52.337610 kubelet[2938]: E0128 01:39:52.335770 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lrhs_kube-system(d50b0b36-811a-467b-a5ed-e0483bb76784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b56468a8108fbd469602bfdd2c62f3aeff3626569cfe81bf996001fd9267cb06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lrhs" podUID="d50b0b36-811a-467b-a5ed-e0483bb76784" Jan 28 01:39:53.093663 sshd[5304]: Connection closed by 10.0.0.1 port 44920 Jan 28 01:39:53.103489 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:53.122000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:53.134411 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:44920.service: Deactivated successfully. Jan 28 01:39:53.138836 systemd-logind[1594]: Session 13 logged out. Waiting for processes to exit. 
Jan 28 01:39:53.154556 kernel: audit: type=1106 audit(1769564393.122:659): pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:53.154642 kernel: audit: type=1104 audit(1769564393.122:660): pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:53.122000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:53.148745 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:39:53.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.88:22-10.0.0.1:44920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:53.178758 systemd-logind[1594]: Removed session 13. 
Jan 28 01:39:53.401711 kubelet[2938]: E0128 01:39:53.393608 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:53.408505 containerd[1612]: time="2026-01-28T01:39:53.404953421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}" Jan 28 01:39:54.154746 containerd[1612]: time="2026-01-28T01:39:54.107773379Z" level=error msg="Failed to destroy network for sandbox \"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:54.112134 systemd[1]: run-netns-cni\x2dc137dd35\x2d9bd6\x2d2c04\x2d21d3\x2d71b307ea0ace.mount: Deactivated successfully. Jan 28 01:39:54.188588 containerd[1612]: time="2026-01-28T01:39:54.187961787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:54.196490 kubelet[2938]: E0128 01:39:54.192883 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 28 01:39:54.196490 kubelet[2938]: E0128 01:39:54.193100 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:54.196490 kubelet[2938]: E0128 01:39:54.193139 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6cpzt" Jan 28 01:39:54.198013 kubelet[2938]: E0128 01:39:54.194555 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6cpzt_kube-system(f0234091-1bfb-4c2b-914c-35e344cefc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc051510230852e1ddc2be7fb3653ef1c9ee12ff131da79687b6b45b144cbe00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6cpzt" podUID="f0234091-1bfb-4c2b-914c-35e344cefc9d" Jan 28 01:39:54.387868 kubelet[2938]: E0128 01:39:54.381847 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:54.388397 containerd[1612]: time="2026-01-28T01:39:54.385067672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:54.395620 containerd[1612]: time="2026-01-28T01:39:54.395548392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:55.019701 containerd[1612]: time="2026-01-28T01:39:55.015443394Z" level=error msg="Failed to destroy network for sandbox \"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.038910 systemd[1]: run-netns-cni\x2d59315d97\x2ddd84\x2daba1\x2d48f4\x2d095779f9e807.mount: Deactivated successfully. 
Jan 28 01:39:55.084774 containerd[1612]: time="2026-01-28T01:39:55.083873362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.090385 kubelet[2938]: E0128 01:39:55.089090 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.090924 kubelet[2938]: E0128 01:39:55.090423 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:55.090924 kubelet[2938]: E0128 01:39:55.090588 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" Jan 28 01:39:55.092638 kubelet[2938]: E0128 01:39:55.091069 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac6707dbdf23e66b67f545fd208a8ae8fb59b50f46b4655e45e444912e36da85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:39:55.198398 containerd[1612]: time="2026-01-28T01:39:55.198037070Z" level=error msg="Failed to destroy network for sandbox \"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.217393 containerd[1612]: time="2026-01-28T01:39:55.217152335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9656966bd-lm4hd,Uid:e39d0520-24ca-4f27-b501-b31974cc3332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.222817 kubelet[2938]: E0128 01:39:55.222765 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:55.224694 kubelet[2938]: E0128 01:39:55.223090 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:55.224694 kubelet[2938]: E0128 01:39:55.223129 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9656966bd-lm4hd" Jan 28 01:39:55.227144 kubelet[2938]: E0128 01:39:55.223179 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9656966bd-lm4hd_calico-system(e39d0520-24ca-4f27-b501-b31974cc3332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ce350d1d37ecf3c2e6186c8752ee183c9c63d2b105d406664ab4d4086f81018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-9656966bd-lm4hd" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" Jan 28 01:39:55.236414 systemd[1]: run-netns-cni\x2d6ff965c1\x2d5920\x2de7a5\x2d1f90\x2d6a5696a890b1.mount: Deactivated successfully. Jan 28 01:39:55.732964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325306008.mount: Deactivated successfully. Jan 28 01:39:55.906329 containerd[1612]: time="2026-01-28T01:39:55.905762734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:39:55.912941 containerd[1612]: time="2026-01-28T01:39:55.912448089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 01:39:55.918575 containerd[1612]: time="2026-01-28T01:39:55.918430612Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:39:55.939343 containerd[1612]: time="2026-01-28T01:39:55.938016838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:39:55.939852 containerd[1612]: time="2026-01-28T01:39:55.939474145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m28.87799228s" Jan 28 01:39:55.939852 containerd[1612]: time="2026-01-28T01:39:55.939536382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:39:55.993695 containerd[1612]: time="2026-01-28T01:39:55.992796384Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:39:56.091437 containerd[1612]: time="2026-01-28T01:39:56.091118403Z" level=info msg="Container b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:39:56.137037 containerd[1612]: time="2026-01-28T01:39:56.136828817Z" level=info msg="CreateContainer within sandbox \"2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4\"" Jan 28 01:39:56.141130 containerd[1612]: time="2026-01-28T01:39:56.140735194Z" level=info msg="StartContainer for \"b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4\"" Jan 28 01:39:56.166872 containerd[1612]: time="2026-01-28T01:39:56.165147592Z" level=info msg="connecting to shim b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4" address="unix:///run/containerd/s/dced5e368dde98fea6ebe6905d4efb99c44003bcde6edf4bbdf5cb482e4bb8f6" protocol=ttrpc version=3 Jan 28 01:39:56.375072 containerd[1612]: time="2026-01-28T01:39:56.374036700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:39:56.415101 systemd[1]: Started cri-containerd-b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4.scope - libcontainer container b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4. 
Jan 28 01:39:56.600000 audit: BPF prog-id=184 op=LOAD Jan 28 01:39:56.630445 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:39:56.630582 kernel: audit: type=1334 audit(1769564396.600:662): prog-id=184 op=LOAD Jan 28 01:39:56.600000 audit[5425]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c0488 a2=98 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.683659 kernel: audit: type=1300 audit(1769564396.600:662): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c0488 a2=98 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:56.823928 kernel: audit: type=1327 audit(1769564396.600:662): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:56.607000 audit: BPF prog-id=185 op=LOAD Jan 28 01:39:56.843459 kernel: audit: type=1334 audit(1769564396.607:663): prog-id=185 op=LOAD Jan 28 01:39:56.607000 audit[5425]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001c0218 a2=98 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:56.960401 kernel: audit: type=1300 audit(1769564396.607:663): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001c0218 a2=98 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.960619 kernel: audit: type=1327 audit(1769564396.607:663): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:56.960661 kernel: audit: type=1334 audit(1769564396.607:664): prog-id=185 op=UNLOAD Jan 28 01:39:56.607000 audit: BPF prog-id=185 op=UNLOAD Jan 28 01:39:56.969165 containerd[1612]: time="2026-01-28T01:39:56.968873052Z" level=error msg="Failed to destroy network for sandbox \"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:56.974131 systemd[1]: run-netns-cni\x2d36f38e8c\x2d51f1\x2d2328\x2d274f\x2dc307e6752109.mount: Deactivated successfully. 
Jan 28 01:39:57.007559 kernel: audit: type=1300 audit(1769564396.607:664): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.607000 audit[5425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:57.068091 kernel: audit: type=1327 audit(1769564396.607:664): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:57.071655 kernel: audit: type=1334 audit(1769564396.607:665): prog-id=184 op=UNLOAD Jan 28 01:39:56.607000 audit: BPF prog-id=184 op=UNLOAD Jan 28 01:39:56.607000 audit[5425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.607000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:56.607000 audit: BPF prog-id=186 op=LOAD Jan 28 01:39:56.607000 audit[5425]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c06e8 a2=98 a3=0 items=0 ppid=3651 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:56.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230346366343235343635323564366336396634623934353538633161 Jan 28 01:39:57.136935 containerd[1612]: time="2026-01-28T01:39:57.136819416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:57.144103 kubelet[2938]: E0128 01:39:57.143555 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 
01:39:57.144103 kubelet[2938]: E0128 01:39:57.143639 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:57.144103 kubelet[2938]: E0128 01:39:57.143676 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" Jan 28 01:39:57.144849 kubelet[2938]: E0128 01:39:57.143749 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c85830fae76f8bb2efe1b5283761472defa710c2baec8b1b0566b61ee44a4afb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:39:57.157193 containerd[1612]: time="2026-01-28T01:39:57.154683951Z" level=info msg="StartContainer for 
\"b04cf42546525d6c69f4b94558c1adda50d97347d7ea833009a5490d9f28fbd4\" returns successfully" Jan 28 01:39:57.428376 kubelet[2938]: E0128 01:39:57.428083 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:57.435541 containerd[1612]: time="2026-01-28T01:39:57.433837383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:57.539946 kubelet[2938]: I0128 01:39:57.538251 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dwj7n" podStartSLOduration=8.936541668 podStartE2EDuration="2m36.538135741s" podCreationTimestamp="2026-01-28 01:37:21 +0000 UTC" firstStartedPulling="2026-01-28 01:37:28.354080326 +0000 UTC m=+139.115553211" lastFinishedPulling="2026-01-28 01:39:55.955674398 +0000 UTC m=+286.717147284" observedRunningTime="2026-01-28 01:39:57.535861241 +0000 UTC m=+288.297334146" watchObservedRunningTime="2026-01-28 01:39:57.538135741 +0000 UTC m=+288.299608625" Jan 28 01:39:57.716790 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:39:57.716911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 01:39:57.818421 containerd[1612]: time="2026-01-28T01:39:57.818172303Z" level=error msg="Failed to destroy network for sandbox \"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:57.826355 systemd[1]: run-netns-cni\x2d1919bd47\x2d8414\x2dc18e\x2dc837\x2d7a993ad266e1.mount: Deactivated successfully. 
Jan 28 01:39:57.830462 kubelet[2938]: E0128 01:39:57.827960 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:39:57.830462 kubelet[2938]: E0128 01:39:57.828050 2938 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:57.830462 kubelet[2938]: E0128 01:39:57.828084 2938 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" Jan 28 01:39:57.830714 containerd[1612]: time="2026-01-28T01:39:57.826527836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 28 01:39:57.830895 kubelet[2938]: E0128 01:39:57.828148 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40143776a004487ca10674c5de71ba3f3de9f4f8f7fbff8171ab4d28834b81cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:39:58.146785 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:42606.service - OpenSSH per-connection server daemon (10.0.0.1:42606). Jan 28 01:39:58.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.88:22-10.0.0.1:42606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:39:58.525971 kubelet[2938]: E0128 01:39:58.524837 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:39:58.575000 audit[5552]: USER_ACCT pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:58.582181 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 42606 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:39:58.597000 audit[5552]: CRED_ACQ pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:58.597000 audit[5552]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7a8a1ad0 a2=3 a3=0 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:39:58.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:39:58.608470 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:39:58.688571 systemd-logind[1594]: New session 14 of user core. Jan 28 01:39:58.731368 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 28 01:39:58.779000 audit[5552]: USER_START pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:58.782003 kubelet[2938]: I0128 01:39:58.778626 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-backend-key-pair\") pod \"e39d0520-24ca-4f27-b501-b31974cc3332\" (UID: \"e39d0520-24ca-4f27-b501-b31974cc3332\") " Jan 28 01:39:58.782003 kubelet[2938]: I0128 01:39:58.779853 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj7j8\" (UniqueName: \"kubernetes.io/projected/e39d0520-24ca-4f27-b501-b31974cc3332-kube-api-access-sj7j8\") pod \"e39d0520-24ca-4f27-b501-b31974cc3332\" (UID: \"e39d0520-24ca-4f27-b501-b31974cc3332\") " Jan 28 01:39:58.782003 kubelet[2938]: I0128 01:39:58.779977 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-ca-bundle\") pod \"e39d0520-24ca-4f27-b501-b31974cc3332\" (UID: \"e39d0520-24ca-4f27-b501-b31974cc3332\") " Jan 28 01:39:58.782003 kubelet[2938]: I0128 01:39:58.781951 2938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e39d0520-24ca-4f27-b501-b31974cc3332" (UID: "e39d0520-24ca-4f27-b501-b31974cc3332"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:39:58.803941 systemd[1]: var-lib-kubelet-pods-e39d0520\x2d24ca\x2d4f27\x2db501\x2db31974cc3332-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj7j8.mount: Deactivated successfully. Jan 28 01:39:58.809425 kubelet[2938]: I0128 01:39:58.809193 2938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39d0520-24ca-4f27-b501-b31974cc3332-kube-api-access-sj7j8" (OuterVolumeSpecName: "kube-api-access-sj7j8") pod "e39d0520-24ca-4f27-b501-b31974cc3332" (UID: "e39d0520-24ca-4f27-b501-b31974cc3332"). InnerVolumeSpecName "kube-api-access-sj7j8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:39:58.811000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:58.814090 systemd[1]: var-lib-kubelet-pods-e39d0520\x2d24ca\x2d4f27\x2db501\x2db31974cc3332-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:39:58.816079 kubelet[2938]: I0128 01:39:58.814954 2938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e39d0520-24ca-4f27-b501-b31974cc3332" (UID: "e39d0520-24ca-4f27-b501-b31974cc3332"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:39:58.882649 kubelet[2938]: I0128 01:39:58.881887 2938 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:39:58.882649 kubelet[2938]: I0128 01:39:58.881930 2938 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj7j8\" (UniqueName: \"kubernetes.io/projected/e39d0520-24ca-4f27-b501-b31974cc3332-kube-api-access-sj7j8\") on node \"localhost\" DevicePath \"\"" Jan 28 01:39:58.882649 kubelet[2938]: I0128 01:39:58.881944 2938 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d0520-24ca-4f27-b501-b31974cc3332-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:39:59.408362 containerd[1612]: time="2026-01-28T01:39:59.408053313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,}" Jan 28 01:39:59.502043 systemd[1]: Removed slice kubepods-besteffort-pode39d0520_24ca_4f27_b501_b31974cc3332.slice - libcontainer container kubepods-besteffort-pode39d0520_24ca_4f27_b501_b31974cc3332.slice. 
Jan 28 01:39:59.562127 sshd[5578]: Connection closed by 10.0.0.1 port 42606 Jan 28 01:39:59.562711 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jan 28 01:39:59.572000 audit[5552]: USER_END pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:59.572000 audit[5552]: CRED_DISP pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:39:59.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.88:22-10.0.0.1:42606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:39:59.590723 systemd-logind[1594]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:39:59.591621 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:42606.service: Deactivated successfully. Jan 28 01:39:59.602743 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:39:59.611032 systemd-logind[1594]: Removed session 14. Jan 28 01:39:59.900412 systemd[1]: Created slice kubepods-besteffort-poda02c8935_a477_4f1a_ba8e_1d2c1d76c8e7.slice - libcontainer container kubepods-besteffort-poda02c8935_a477_4f1a_ba8e_1d2c1d76c8e7.slice. 
Jan 28 01:39:59.922911 kubelet[2938]: I0128 01:39:59.921745 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmfm\" (UniqueName: \"kubernetes.io/projected/a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7-kube-api-access-ppmfm\") pod \"whisker-757b4d7df4-d9b2x\" (UID: \"a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7\") " pod="calico-system/whisker-757b4d7df4-d9b2x" Jan 28 01:39:59.922911 kubelet[2938]: I0128 01:39:59.921804 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7-whisker-backend-key-pair\") pod \"whisker-757b4d7df4-d9b2x\" (UID: \"a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7\") " pod="calico-system/whisker-757b4d7df4-d9b2x" Jan 28 01:39:59.922911 kubelet[2938]: I0128 01:39:59.921828 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7-whisker-ca-bundle\") pod \"whisker-757b4d7df4-d9b2x\" (UID: \"a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7\") " pod="calico-system/whisker-757b4d7df4-d9b2x" Jan 28 01:40:00.225854 containerd[1612]: time="2026-01-28T01:40:00.224506186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757b4d7df4-d9b2x,Uid:a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7,Namespace:calico-system,Attempt:0,}" Jan 28 01:40:00.987740 systemd-networkd[1516]: cali195e0bfdd9c: Link UP Jan 28 01:40:01.016217 systemd-networkd[1516]: cali195e0bfdd9c: Gained carrier Jan 28 01:40:01.138794 containerd[1612]: 2026-01-28 01:39:59.697 [INFO][5603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:40:01.138794 containerd[1612]: 2026-01-28 01:39:59.824 [INFO][5603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--4245v-eth0 csi-node-driver- calico-system a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef 1009 0 2026-01-28 01:37:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4245v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali195e0bfdd9c [] [] }} ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-" Jan 28 01:40:01.138794 containerd[1612]: 2026-01-28 01:39:59.825 [INFO][5603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.138794 containerd[1612]: 2026-01-28 01:40:00.580 [INFO][5621] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" HandleID="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Workload="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.583 [INFO][5621] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" HandleID="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Workload="localhost-k8s-csi--node--driver--4245v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-4245v", "timestamp":"2026-01-28 01:40:00.580628034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.583 [INFO][5621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.585 [INFO][5621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.586 [INFO][5621] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.619 [INFO][5621] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" host="localhost" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.697 [INFO][5621] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.727 [INFO][5621] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.731 [INFO][5621] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.744 [INFO][5621] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:01.159835 containerd[1612]: 2026-01-28 01:40:00.748 [INFO][5621] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" host="localhost" Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.759 [INFO][5621] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1 Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.782 [INFO][5621] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" host="localhost" Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.808 [INFO][5621] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" host="localhost" Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.809 [INFO][5621] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" host="localhost" Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.810 [INFO][5621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:40:01.160495 containerd[1612]: 2026-01-28 01:40:00.810 [INFO][5621] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" HandleID="k8s-pod-network.2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Workload="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.160662 containerd[1612]: 2026-01-28 01:40:00.821 [INFO][5603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4245v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4245v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali195e0bfdd9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:01.160814 containerd[1612]: 2026-01-28 01:40:00.822 [INFO][5603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.160814 containerd[1612]: 2026-01-28 01:40:00.822 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali195e0bfdd9c ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.160814 containerd[1612]: 2026-01-28 01:40:01.015 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.160910 containerd[1612]: 2026-01-28 01:40:01.022 [INFO][5603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4245v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef", ResourceVersion:"1009", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1", Pod:"csi-node-driver-4245v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali195e0bfdd9c", MAC:"2e:13:1c:34:e7:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:01.161085 containerd[1612]: 2026-01-28 01:40:01.078 [INFO][5603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" Namespace="calico-system" Pod="csi-node-driver-4245v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4245v-eth0" Jan 28 01:40:01.189073 systemd-networkd[1516]: cali06d0e8a60e6: Link UP Jan 28 01:40:01.191553 systemd-networkd[1516]: cali06d0e8a60e6: Gained carrier Jan 28 01:40:01.273056 containerd[1612]: 2026-01-28 01:40:00.402 [INFO][5633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:40:01.273056 containerd[1612]: 2026-01-28 01:40:00.476 [INFO][5633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--757b4d7df4--d9b2x-eth0 whisker-757b4d7df4- calico-system a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7 1565 0 2026-01-28 01:39:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:757b4d7df4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-757b4d7df4-d9b2x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali06d0e8a60e6 [] [] }} ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-" Jan 28 01:40:01.273056 containerd[1612]: 2026-01-28 01:40:00.478 [INFO][5633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.273056 containerd[1612]: 2026-01-28 01:40:00.640 [INFO][5655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" HandleID="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Workload="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.647 [INFO][5655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" HandleID="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Workload="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058d300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-757b4d7df4-d9b2x", 
"timestamp":"2026-01-28 01:40:00.640130856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.648 [INFO][5655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.810 [INFO][5655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.811 [INFO][5655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.843 [INFO][5655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" host="localhost" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.890 [INFO][5655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:00.957 [INFO][5655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:01.001 [INFO][5655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:01.023 [INFO][5655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:01.273781 containerd[1612]: 2026-01-28 01:40:01.023 [INFO][5655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" host="localhost" Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.033 [INFO][5655] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.066 [INFO][5655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" host="localhost" Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.105 [INFO][5655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" host="localhost" Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.105 [INFO][5655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" host="localhost" Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.105 [INFO][5655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:40:01.280574 containerd[1612]: 2026-01-28 01:40:01.105 [INFO][5655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" HandleID="k8s-pod-network.0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Workload="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.280819 containerd[1612]: 2026-01-28 01:40:01.161 [INFO][5633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--757b4d7df4--d9b2x-eth0", GenerateName:"whisker-757b4d7df4-", Namespace:"calico-system", SelfLink:"", UID:"a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7", ResourceVersion:"1565", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757b4d7df4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-757b4d7df4-d9b2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06d0e8a60e6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:01.280819 containerd[1612]: 2026-01-28 01:40:01.161 [INFO][5633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.281016 containerd[1612]: 2026-01-28 01:40:01.161 [INFO][5633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06d0e8a60e6 ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.281016 containerd[1612]: 2026-01-28 01:40:01.193 [INFO][5633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.281090 containerd[1612]: 2026-01-28 01:40:01.194 [INFO][5633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--757b4d7df4--d9b2x-eth0", GenerateName:"whisker-757b4d7df4-", Namespace:"calico-system", SelfLink:"", UID:"a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7", ResourceVersion:"1565", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 39, 59, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757b4d7df4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc", Pod:"whisker-757b4d7df4-d9b2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06d0e8a60e6", MAC:"4a:e2:7a:d8:84:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:01.281582 containerd[1612]: 2026-01-28 01:40:01.263 [INFO][5633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" Namespace="calico-system" Pod="whisker-757b4d7df4-d9b2x" WorkloadEndpoint="localhost-k8s-whisker--757b4d7df4--d9b2x-eth0" Jan 28 01:40:01.374875 containerd[1612]: time="2026-01-28T01:40:01.374629303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,}" Jan 28 01:40:01.385833 kubelet[2938]: I0128 01:40:01.385756 2938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39d0520-24ca-4f27-b501-b31974cc3332" path="/var/lib/kubelet/pods/e39d0520-24ca-4f27-b501-b31974cc3332/volumes" Jan 28 01:40:01.676550 containerd[1612]: time="2026-01-28T01:40:01.676505526Z" level=info msg="connecting to shim 
2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1" address="unix:///run/containerd/s/fb56dc1519efba2dd8d3a904894ed22610385c86a4531daf28c5da22db7d0d25" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:01.715866 containerd[1612]: time="2026-01-28T01:40:01.715756465Z" level=info msg="connecting to shim 0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc" address="unix:///run/containerd/s/8ad87912c37be797115dc7c80d26342c01cc0dd14c394f34f93204e4187ce2a4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:01.862886 systemd[1]: Started cri-containerd-2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1.scope - libcontainer container 2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1. Jan 28 01:40:02.031000 audit: BPF prog-id=187 op=LOAD Jan 28 01:40:02.039884 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 28 01:40:02.040061 kernel: audit: type=1334 audit(1769564402.031:676): prog-id=187 op=LOAD Jan 28 01:40:02.068000 audit: BPF prog-id=188 op=LOAD Jan 28 01:40:02.067089 systemd[1]: Started cri-containerd-0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc.scope - libcontainer container 0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc. 
Jan 28 01:40:02.082440 kernel: audit: type=1334 audit(1769564402.068:677): prog-id=188 op=LOAD Jan 28 01:40:02.068000 audit[5796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.144801 kernel: audit: type=1300 audit(1769564402.068:677): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.163786 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:02.203374 kernel: audit: type=1327 audit(1769564402.068:677): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.068000 audit: BPF prog-id=188 op=UNLOAD Jan 28 01:40:02.068000 audit[5796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.258751 kernel: audit: type=1334 
audit(1769564402.068:678): prog-id=188 op=UNLOAD Jan 28 01:40:02.259061 kernel: audit: type=1300 audit(1769564402.068:678): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.270716 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:02.072000 audit: BPF prog-id=189 op=LOAD Jan 28 01:40:02.307150 kernel: audit: type=1327 audit(1769564402.068:678): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.307373 kernel: audit: type=1334 audit(1769564402.072:679): prog-id=189 op=LOAD Jan 28 01:40:02.072000 audit[5796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.341818 kernel: audit: type=1300 audit(1769564402.072:679): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:40:02.341940 kernel: audit: type=1327 audit(1769564402.072:679): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.072000 audit: BPF prog-id=190 op=LOAD Jan 28 01:40:02.072000 audit[5796]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.072000 audit: BPF prog-id=190 op=UNLOAD Jan 28 01:40:02.072000 audit[5796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.072000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.072000 audit: BPF prog-id=189 op=UNLOAD Jan 28 01:40:02.072000 audit[5796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.072000 audit: BPF prog-id=191 op=LOAD Jan 28 01:40:02.072000 audit[5796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5759 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231343337383938393131333636393862333934646364306363366562 Jan 28 01:40:02.169000 audit: BPF prog-id=192 op=LOAD Jan 28 01:40:02.171000 audit: BPF prog-id=193 op=LOAD Jan 28 01:40:02.171000 audit[5842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000160238 a2=98 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.171000 audit: BPF prog-id=193 op=UNLOAD Jan 28 01:40:02.171000 audit[5842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.173000 audit: BPF prog-id=194 op=LOAD Jan 28 01:40:02.173000 audit[5842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000160488 a2=98 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.174000 audit: BPF prog-id=195 op=LOAD Jan 28 01:40:02.174000 audit[5842]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000160218 a2=98 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.174000 audit: BPF prog-id=195 op=UNLOAD Jan 28 01:40:02.174000 audit[5842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.174000 audit: BPF prog-id=194 op=UNLOAD Jan 28 01:40:02.174000 audit[5842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.174000 audit: BPF prog-id=196 op=LOAD Jan 28 01:40:02.174000 audit[5842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001606e8 a2=98 a3=0 items=0 ppid=5780 pid=5842 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063333464366636376632306165323466633864616632336562393132 Jan 28 01:40:02.454059 systemd-networkd[1516]: cali195e0bfdd9c: Gained IPv6LL Jan 28 01:40:02.566477 systemd-networkd[1516]: cali8c40b6636c2: Link UP Jan 28 01:40:02.571794 systemd-networkd[1516]: cali8c40b6636c2: Gained carrier Jan 28 01:40:02.627353 containerd[1612]: time="2026-01-28T01:40:02.626785098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4245v,Uid:a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"2143789891136698b394dcd0cc6eb9493fd76d1afc908c5d7f3726af230426b1\"" Jan 28 01:40:02.641204 containerd[1612]: time="2026-01-28T01:40:02.641019109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:40:02.714662 containerd[1612]: 2026-01-28 01:40:01.733 [INFO][5682] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:40:02.714662 containerd[1612]: 2026-01-28 01:40:01.826 [INFO][5682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--s28bh-eth0 goldmane-666569f655- calico-system 55f83d8e-e337-4a1b-9dba-8df114668f11 1235 0 2026-01-28 01:37:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-s28bh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8c40b6636c2 [] [] }} 
ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-" Jan 28 01:40:02.714662 containerd[1612]: 2026-01-28 01:40:01.826 [INFO][5682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.714662 containerd[1612]: 2026-01-28 01:40:02.056 [INFO][5845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" HandleID="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Workload="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.056 [INFO][5845] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" HandleID="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Workload="localhost-k8s-goldmane--666569f655--s28bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-s28bh", "timestamp":"2026-01-28 01:40:02.056091453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.056 [INFO][5845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.057 [INFO][5845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.057 [INFO][5845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.088 [INFO][5845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" host="localhost" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.140 [INFO][5845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.190 [INFO][5845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.228 [INFO][5845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.265 [INFO][5845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:02.715130 containerd[1612]: 2026-01-28 01:40:02.265 [INFO][5845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" host="localhost" Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.275 [INFO][5845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.304 [INFO][5845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" host="localhost" Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.331 [INFO][5845] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" host="localhost" Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.332 [INFO][5845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" host="localhost" Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.332 [INFO][5845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:40:02.715902 containerd[1612]: 2026-01-28 01:40:02.332 [INFO][5845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" HandleID="k8s-pod-network.f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Workload="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.716104 containerd[1612]: 2026-01-28 01:40:02.360 [INFO][5682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s28bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55f83d8e-e337-4a1b-9dba-8df114668f11", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-s28bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c40b6636c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:02.716104 containerd[1612]: 2026-01-28 01:40:02.361 [INFO][5682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.718864 containerd[1612]: 2026-01-28 01:40:02.361 [INFO][5682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c40b6636c2 ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.718864 containerd[1612]: 2026-01-28 01:40:02.560 [INFO][5682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.718943 containerd[1612]: 2026-01-28 01:40:02.583 [INFO][5682] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s28bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55f83d8e-e337-4a1b-9dba-8df114668f11", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc", Pod:"goldmane-666569f655-s28bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c40b6636c2", MAC:"a6:53:40:2b:0c:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:02.719110 containerd[1612]: 2026-01-28 01:40:02.669 [INFO][5682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" Namespace="calico-system" Pod="goldmane-666569f655-s28bh" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s28bh-eth0" Jan 28 01:40:02.831396 containerd[1612]: time="2026-01-28T01:40:02.829416058Z" level=info msg="connecting to shim f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc" address="unix:///run/containerd/s/c71a9fda6425785006241747b497b16df36cbf50602d98ba51e44d772d4c6bf8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:02.868416 containerd[1612]: time="2026-01-28T01:40:02.863867408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757b4d7df4-d9b2x,Uid:a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c34d6f67f20ae24fc8daf23eb9125aef0018fc9c3a1cb20abde89a3fa4e9edc\"" Jan 28 01:40:02.877437 containerd[1612]: time="2026-01-28T01:40:02.877109520Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:02.894099 containerd[1612]: time="2026-01-28T01:40:02.887207008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:40:02.894099 containerd[1612]: time="2026-01-28T01:40:02.889071570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:02.894489 kubelet[2938]: E0128 01:40:02.893141 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:02.894489 kubelet[2938]: E0128 01:40:02.893220 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:02.901754 kubelet[2938]: E0128 01:40:02.900888 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:02.908955 containerd[1612]: time="2026-01-28T01:40:02.907947185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:40:03.034710 containerd[1612]: time="2026-01-28T01:40:03.034581148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:03.038485 containerd[1612]: time="2026-01-28T01:40:03.038089490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:40:03.038485 containerd[1612]: time="2026-01-28T01:40:03.038363531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:03.038862 kubelet[2938]: E0128 01:40:03.038531 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:40:03.038862 kubelet[2938]: E0128 01:40:03.038594 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:40:03.039133 kubelet[2938]: E0128 01:40:03.038858 2938 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49231b61cab941e2b913be4ad476f1ab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:03.046937 containerd[1612]: time="2026-01-28T01:40:03.040034889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:40:03.065586 systemd[1]: Started 
cri-containerd-f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc.scope - libcontainer container f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc. Jan 28 01:40:03.156696 systemd-networkd[1516]: cali06d0e8a60e6: Gained IPv6LL Jan 28 01:40:03.195855 containerd[1612]: time="2026-01-28T01:40:03.195800001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:03.201059 containerd[1612]: time="2026-01-28T01:40:03.201007349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:40:03.201840 containerd[1612]: time="2026-01-28T01:40:03.201625654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:03.204802 kubelet[2938]: E0128 01:40:03.204417 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:40:03.205099 kubelet[2938]: E0128 01:40:03.204949 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:40:03.207587 kubelet[2938]: E0128 01:40:03.207458 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:03.207804 containerd[1612]: time="2026-01-28T01:40:03.207566592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:40:03.209755 kubelet[2938]: E0128 01:40:03.209583 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:40:03.212000 audit: BPF prog-id=197 op=LOAD Jan 28 01:40:03.214000 audit: BPF prog-id=198 op=LOAD Jan 28 01:40:03.214000 audit[5940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.214000 audit: BPF prog-id=198 op=UNLOAD Jan 28 01:40:03.214000 audit[5940]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.215000 audit: BPF prog-id=199 op=LOAD Jan 28 01:40:03.215000 audit[5940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.215000 audit: BPF prog-id=200 op=LOAD Jan 28 01:40:03.215000 audit[5940]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.217000 audit: BPF prog-id=200 op=UNLOAD Jan 28 01:40:03.217000 
audit[5940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.217000 audit: BPF prog-id=199 op=UNLOAD Jan 28 01:40:03.217000 audit[5940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.217000 audit: BPF prog-id=201 op=LOAD Jan 28 01:40:03.217000 audit[5940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5928 pid=5940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632303938343833393138333138333137343131353632343136363365 Jan 28 01:40:03.221455 systemd-resolved[1299]: 
Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:03.331033 containerd[1612]: time="2026-01-28T01:40:03.321072359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:03.342726 containerd[1612]: time="2026-01-28T01:40:03.335403342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:40:03.342726 containerd[1612]: time="2026-01-28T01:40:03.336493286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:03.344913 kubelet[2938]: E0128 01:40:03.344866 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:40:03.345037 kubelet[2938]: E0128 01:40:03.345015 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:40:03.365195 kubelet[2938]: E0128 01:40:03.345207 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:03.381589 kubelet[2938]: E0128 01:40:03.373815 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:40:03.836896 containerd[1612]: time="2026-01-28T01:40:03.836494559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s28bh,Uid:55f83d8e-e337-4a1b-9dba-8df114668f11,Namespace:calico-system,Attempt:0,} returns sandbox id \"f209848391831831741156241663e8eb8deeae41d5773698eefb7ebf7f9a84dc\"" Jan 28 01:40:03.852899 kubelet[2938]: E0128 01:40:03.852137 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:40:03.871933 containerd[1612]: time="2026-01-28T01:40:03.861942398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:40:03.872141 kubelet[2938]: E0128 01:40:03.869896 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:40:03.972000 audit: BPF prog-id=202 op=LOAD Jan 28 01:40:03.972000 audit[5981]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3369e50 a2=98 a3=1fffffffffffffff items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.972000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.972000 audit: BPF prog-id=202 op=UNLOAD Jan 28 01:40:03.972000 audit[5981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec3369e20 a3=0 items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.972000 audit: BPF prog-id=203 op=LOAD Jan 28 01:40:03.972000 audit[5981]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3369d30 a2=94 a3=3 items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.972000 audit: BPF prog-id=203 op=UNLOAD Jan 28 01:40:03.972000 audit[5981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec3369d30 a2=94 a3=3 items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.972000 audit: BPF prog-id=204 op=LOAD Jan 28 01:40:03.972000 audit[5981]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3369d70 a2=94 a3=7ffec3369f50 items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.984000 audit: BPF prog-id=204 op=UNLOAD Jan 28 01:40:03.984000 audit[5981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec3369d70 a2=94 a3=7ffec3369f50 items=0 ppid=5767 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.984000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:40:03.999000 audit: BPF prog-id=205 op=LOAD Jan 28 01:40:03.999000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebf95f5c0 a2=98 a3=3 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:03.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.000000 audit: BPF prog-id=205 op=UNLOAD Jan 28 01:40:04.000000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebf95f590 a3=0 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.007000 audit: BPF prog-id=206 op=LOAD Jan 28 01:40:04.007000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebf95f3b0 a2=94 a3=54428f items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.007000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.013000 audit: BPF prog-id=206 op=UNLOAD Jan 28 01:40:04.013000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebf95f3b0 a2=94 a3=54428f items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.013000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.013000 audit: BPF prog-id=207 op=LOAD Jan 28 01:40:04.013000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebf95f3e0 a2=94 a3=2 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.013000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.017000 audit: BPF prog-id=207 op=UNLOAD Jan 28 01:40:04.017000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebf95f3e0 a2=0 a3=2 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.017000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:04.029016 containerd[1612]: time="2026-01-28T01:40:04.023735477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:04.036403 containerd[1612]: time="2026-01-28T01:40:04.033700453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:40:04.036403 containerd[1612]: time="2026-01-28T01:40:04.033817981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:04.066151 kubelet[2938]: E0128 01:40:04.062202 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:04.066151 kubelet[2938]: E0128 01:40:04.062406 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:04.066151 kubelet[2938]: E0128 01:40:04.062575 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqn7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:04.072357 kubelet[2938]: E0128 01:40:04.067440 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:04.373984 kubelet[2938]: E0128 01:40:04.373729 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:04.374994 containerd[1612]: time="2026-01-28T01:40:04.374915748Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,}" Jan 28 01:40:04.472000 audit[5984]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:04.472000 audit[5984]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffffbe0d080 a2=0 a3=7ffffbe0d06c items=0 ppid=3042 pid=5984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:04.568741 systemd-networkd[1516]: cali8c40b6636c2: Gained IPv6LL Jan 28 01:40:04.581000 audit[5984]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:04.581000 audit[5984]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffffbe0d080 a2=0 a3=0 items=0 ppid=3042 pid=5984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:04.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.88:22-10.0.0.1:48132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:04.617996 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:48132.service - OpenSSH per-connection server daemon (10.0.0.1:48132). 
Jan 28 01:40:04.965511 kubelet[2938]: E0128 01:40:04.965140 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:04.988930 kubelet[2938]: E0128 01:40:04.987488 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:40:04.988930 kubelet[2938]: E0128 01:40:04.987658 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:40:05.229000 audit[5999]: USER_ACCT pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:05.232000 audit[5999]: CRED_ACQ pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:05.232000 audit[5999]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd34c530a0 a2=3 a3=0 items=0 ppid=1 pid=5999 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:05.235217 sshd[5999]: Accepted publickey for core from 10.0.0.1 port 48132 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:05.236526 sshd-session[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:05.260000 audit[6015]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=6015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:05.260000 audit[6015]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffd23f09390 a2=0 a3=7ffd23f0937c items=0 ppid=3042 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:05.267000 audit: BPF prog-id=208 op=LOAD Jan 28 01:40:05.267000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebf95f2a0 a2=94 a3=1 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.267000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.267000 audit: BPF prog-id=208 op=UNLOAD Jan 28 01:40:05.267000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebf95f2a0 a2=94 a3=1 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.267000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.292000 audit[6015]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=6015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:05.304000 audit: BPF prog-id=209 op=LOAD Jan 28 01:40:05.304000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebf95f290 a2=94 a3=4 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.304000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.304000 audit: BPF prog-id=209 op=UNLOAD Jan 28 01:40:05.304000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffebf95f290 a2=0 a3=4 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.308000 audit: BPF prog-id=210 op=LOAD Jan 28 01:40:05.308000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebf95f0f0 a2=94 a3=5 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.308000 audit: BPF prog-id=210 op=UNLOAD Jan 28 01:40:05.308000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffebf95f0f0 a2=0 a3=5 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.308000 audit: BPF prog-id=211 op=LOAD Jan 28 01:40:05.308000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebf95f310 a2=94 a3=6 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 
01:40:05.308000 audit: BPF prog-id=211 op=UNLOAD Jan 28 01:40:05.308000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffebf95f310 a2=0 a3=6 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.310000 audit: BPF prog-id=212 op=LOAD Jan 28 01:40:05.310000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebf95eac0 a2=94 a3=88 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.310000 audit: BPF prog-id=213 op=LOAD Jan 28 01:40:05.310000 audit[5982]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffebf95e940 a2=94 a3=2 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.310000 audit: BPF prog-id=213 op=UNLOAD Jan 28 01:40:05.310000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffebf95e970 a2=0 a3=7ffebf95ea70 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.312000 audit: BPF prog-id=212 op=UNLOAD Jan 28 
01:40:05.312000 audit[5982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=29f69d10 a2=0 a3=38703f4eceff9ed7 items=0 ppid=5767 pid=5982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.312000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:40:05.292000 audit[6015]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd23f09390 a2=0 a3=0 items=0 ppid=3042 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:05.389516 systemd-logind[1594]: New session 15 of user core. 
Jan 28 01:40:05.394000 audit: BPF prog-id=214 op=LOAD Jan 28 01:40:05.394000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd9121be0 a2=98 a3=1999999999999999 items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.394000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.407000 audit: BPF prog-id=214 op=UNLOAD Jan 28 01:40:05.407000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd9121bb0 a3=0 items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.407000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.407000 audit: BPF prog-id=215 op=LOAD Jan 28 01:40:05.407000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd9121ac0 a2=94 a3=ffff items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.407000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.413000 audit: BPF prog-id=215 op=UNLOAD Jan 28 01:40:05.413000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd9121ac0 a2=94 a3=ffff items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.413000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.413000 audit: BPF prog-id=216 op=LOAD Jan 28 01:40:05.413000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd9121b00 a2=94 a3=7ffcd9121ce0 items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.413000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.413000 audit: BPF prog-id=216 op=UNLOAD Jan 28 01:40:05.413000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd9121b00 a2=94 a3=7ffcd9121ce0 items=0 ppid=5767 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:05.413000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:40:05.413643 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:40:05.492000 audit[5999]: USER_START pid=5999 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:05.519000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:06.024477 kubelet[2938]: E0128 01:40:06.021636 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:06.023649 systemd-networkd[1516]: calid9a05b5ad66: Link UP Jan 28 01:40:06.047955 systemd-networkd[1516]: calid9a05b5ad66: Gained carrier Jan 28 01:40:06.328482 containerd[1612]: 2026-01-28 01:40:04.673 [INFO][5985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0 coredns-668d6bf9bc- kube-system d50b0b36-811a-467b-a5ed-e0483bb76784 1228 0 2026-01-28 01:35:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2lrhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9a05b5ad66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-" Jan 28 01:40:06.328482 containerd[1612]: 2026-01-28 01:40:04.678 [INFO][5985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.328482 containerd[1612]: 2026-01-28 01:40:04.955 [INFO][6001] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" HandleID="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Workload="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:04.956 [INFO][6001] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" HandleID="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Workload="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3b00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2lrhs", "timestamp":"2026-01-28 
01:40:04.955989596 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:04.956 [INFO][6001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:04.956 [INFO][6001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:04.956 [INFO][6001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.048 [INFO][6001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" host="localhost" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.123 [INFO][6001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.216 [INFO][6001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.390 [INFO][6001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.452 [INFO][6001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:06.329783 containerd[1612]: 2026-01-28 01:40:05.516 [INFO][6001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" host="localhost" Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.598 [INFO][6001] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.831 [INFO][6001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" host="localhost" Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.938 [INFO][6001] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" host="localhost" Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.938 [INFO][6001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" host="localhost" Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.938 [INFO][6001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:40:06.340854 containerd[1612]: 2026-01-28 01:40:05.938 [INFO][6001] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" HandleID="k8s-pod-network.189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Workload="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.356948 containerd[1612]: 2026-01-28 01:40:05.979 [INFO][5985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d50b0b36-811a-467b-a5ed-e0483bb76784", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2lrhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a05b5ad66", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:06.357124 containerd[1612]: 2026-01-28 01:40:05.979 [INFO][5985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.357124 containerd[1612]: 2026-01-28 01:40:05.979 [INFO][5985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9a05b5ad66 ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.357124 containerd[1612]: 2026-01-28 01:40:06.055 [INFO][5985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.357227 containerd[1612]: 2026-01-28 01:40:06.056 [INFO][5985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d50b0b36-811a-467b-a5ed-e0483bb76784", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b", Pod:"coredns-668d6bf9bc-2lrhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a05b5ad66", MAC:"9e:03:bd:b0:f9:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:06.357227 containerd[1612]: 2026-01-28 01:40:06.291 [INFO][5985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lrhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2lrhs-eth0" Jan 28 01:40:06.369367 kubelet[2938]: E0128 01:40:06.369194 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:06.712501 sshd[6021]: Connection closed by 10.0.0.1 port 48132 Jan 28 01:40:06.738982 sshd-session[5999]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:06.773000 audit[5999]: USER_END pid=5999 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:06.773000 audit[5999]: CRED_DISP pid=5999 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:06.785214 containerd[1612]: time="2026-01-28T01:40:06.784994844Z" level=info msg="connecting to shim 189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b" address="unix:///run/containerd/s/2096130ef8f0e1b9301d5473a9f0a4c6300378c8dbf978819aba29b867a1fd7e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:06.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.88:22-10.0.0.1:48132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:06.796693 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:48132.service: Deactivated successfully. 
Jan 28 01:40:06.805091 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:40:06.815597 systemd-logind[1594]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:40:06.819538 systemd-logind[1594]: Removed session 15. Jan 28 01:40:06.929067 systemd-networkd[1516]: vxlan.calico: Link UP Jan 28 01:40:06.929080 systemd-networkd[1516]: vxlan.calico: Gained carrier Jan 28 01:40:06.998748 systemd[1]: Started cri-containerd-189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b.scope - libcontainer container 189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b. Jan 28 01:40:07.129660 kernel: kauditd_printk_skb: 169 callbacks suppressed Jan 28 01:40:07.129824 kernel: audit: type=1334 audit(1769564407.118:743): prog-id=217 op=LOAD Jan 28 01:40:07.118000 audit: BPF prog-id=217 op=LOAD Jan 28 01:40:07.122000 audit: BPF prog-id=218 op=LOAD Jan 28 01:40:07.142194 kernel: audit: type=1334 audit(1769564407.122:744): prog-id=218 op=LOAD Jan 28 01:40:07.141919 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:07.122000 audit[6083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.171494 kernel: audit: type=1300 audit(1769564407.122:744): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.171785 kernel: audit: type=1327 audit(1769564407.122:744): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.122000 audit: BPF prog-id=218 op=UNLOAD Jan 28 01:40:07.122000 audit[6083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.228832 kernel: audit: type=1334 audit(1769564407.122:745): prog-id=218 op=UNLOAD Jan 28 01:40:07.228993 kernel: audit: type=1300 audit(1769564407.122:745): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.292851 kernel: audit: type=1327 audit(1769564407.122:745): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.122000 audit: BPF prog-id=219 op=LOAD Jan 28 01:40:07.122000 audit[6083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.339962 kernel: audit: type=1334 audit(1769564407.122:746): prog-id=219 op=LOAD Jan 28 01:40:07.341019 kernel: audit: type=1300 audit(1769564407.122:746): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.369869 kubelet[2938]: E0128 01:40:07.369057 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:07.373418 kernel: audit: type=1327 audit(1769564407.122:746): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 
01:40:07.123000 audit: BPF prog-id=220 op=LOAD Jan 28 01:40:07.123000 audit[6083]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.123000 audit: BPF prog-id=220 op=UNLOAD Jan 28 01:40:07.123000 audit[6083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.123000 audit: BPF prog-id=219 op=UNLOAD Jan 28 01:40:07.123000 audit[6083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.123000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.123000 audit: BPF prog-id=221 op=LOAD Jan 28 01:40:07.123000 audit[6083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=6067 pid=6083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138396661376631383836336337363931356565626465653736613538 Jan 28 01:40:07.301000 audit: BPF prog-id=222 op=LOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe04843b40 a2=98 a3=0 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=222 op=UNLOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe04843b10 a3=0 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=223 op=LOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe04843950 a2=94 a3=54428f items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=223 op=UNLOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe04843950 a2=94 a3=54428f items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=224 op=LOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe04843980 a2=94 a3=2 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=224 op=UNLOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe04843980 a2=0 a3=2 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=225 op=LOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe04843730 a2=94 a3=4 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=225 op=UNLOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe04843730 a2=94 a3=4 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=226 op=LOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe04843830 a2=94 a3=7ffe048439b0 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.301000 audit: BPF prog-id=226 op=UNLOAD Jan 28 01:40:07.301000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe04843830 a2=0 a3=7ffe048439b0 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.305000 audit: BPF prog-id=227 op=LOAD Jan 28 01:40:07.305000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe04842f60 a2=94 a3=2 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.305000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.305000 audit: BPF prog-id=227 op=UNLOAD Jan 28 01:40:07.305000 audit[6112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe04842f60 a2=0 a3=2 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.374000 audit: BPF prog-id=228 op=LOAD Jan 28 01:40:07.374000 audit[6112]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe04843060 a2=94 a3=30 items=0 ppid=5767 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.374000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:40:07.394840 containerd[1612]: time="2026-01-28T01:40:07.394458703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,}" Jan 28 01:40:07.401558 containerd[1612]: time="2026-01-28T01:40:07.399156700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,}" 
Jan 28 01:40:07.445000 audit: BPF prog-id=229 op=LOAD Jan 28 01:40:07.445000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd9f21230 a2=98 a3=0 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.445000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.448000 audit: BPF prog-id=229 op=UNLOAD Jan 28 01:40:07.448000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd9f21200 a3=0 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.455000 audit: BPF prog-id=230 op=LOAD Jan 28 01:40:07.455000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd9f21020 a2=94 a3=54428f items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.455000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.455000 audit: BPF prog-id=230 op=UNLOAD Jan 28 01:40:07.455000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 
a1=7ffcd9f21020 a2=94 a3=54428f items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.455000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.455000 audit: BPF prog-id=231 op=LOAD Jan 28 01:40:07.455000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd9f21050 a2=94 a3=2 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.455000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.455000 audit: BPF prog-id=231 op=UNLOAD Jan 28 01:40:07.455000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd9f21050 a2=0 a3=2 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:07.455000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:07.493218 containerd[1612]: time="2026-01-28T01:40:07.492827350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lrhs,Uid:d50b0b36-811a-467b-a5ed-e0483bb76784,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b\"" Jan 28 01:40:07.494466 kubelet[2938]: E0128 01:40:07.494231 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:07.535877 containerd[1612]: time="2026-01-28T01:40:07.535829574Z" level=info msg="CreateContainer within sandbox \"189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:40:07.639809 systemd-networkd[1516]: calid9a05b5ad66: Gained IPv6LL Jan 28 01:40:07.763147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015165362.mount: Deactivated successfully. Jan 28 01:40:07.852866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451853672.mount: Deactivated successfully. Jan 28 01:40:07.870957 containerd[1612]: time="2026-01-28T01:40:07.866578472Z" level=info msg="Container 154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:40:08.119534 containerd[1612]: time="2026-01-28T01:40:08.118737468Z" level=info msg="CreateContainer within sandbox \"189fa7f18863c76915eebdee76a581e49525dd4a1ee315930473c38c2333cb0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0\"" Jan 28 01:40:08.126518 containerd[1612]: time="2026-01-28T01:40:08.123035987Z" level=info msg="StartContainer for \"154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0\"" Jan 28 01:40:08.126518 containerd[1612]: time="2026-01-28T01:40:08.126205301Z" level=info msg="connecting to shim 154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0" address="unix:///run/containerd/s/2096130ef8f0e1b9301d5473a9f0a4c6300378c8dbf978819aba29b867a1fd7e" protocol=ttrpc version=3 Jan 28 01:40:08.294000 audit: BPF prog-id=232 op=LOAD Jan 28 
01:40:08.294000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd9f20f10 a2=94 a3=1 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.296000 audit: BPF prog-id=232 op=UNLOAD Jan 28 01:40:08.296000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd9f20f10 a2=94 a3=1 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.339692 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Jan 28 01:40:08.347684 systemd[1]: Started cri-containerd-154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0.scope - libcontainer container 154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0. 
Jan 28 01:40:08.349000 audit: BPF prog-id=233 op=LOAD Jan 28 01:40:08.349000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd9f20f00 a2=94 a3=4 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.351000 audit: BPF prog-id=233 op=UNLOAD Jan 28 01:40:08.351000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd9f20f00 a2=0 a3=4 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.351000 audit: BPF prog-id=234 op=LOAD Jan 28 01:40:08.351000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd9f20d60 a2=94 a3=5 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.351000 audit: BPF prog-id=234 op=UNLOAD Jan 28 01:40:08.351000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 
a1=7ffcd9f20d60 a2=0 a3=5 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.351000 audit: BPF prog-id=235 op=LOAD Jan 28 01:40:08.351000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd9f20f80 a2=94 a3=6 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.351000 audit: BPF prog-id=235 op=UNLOAD Jan 28 01:40:08.351000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd9f20f80 a2=0 a3=6 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.371921 kubelet[2938]: E0128 01:40:08.371061 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:08.371000 audit: BPF prog-id=236 op=LOAD Jan 28 
01:40:08.371000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd9f20730 a2=94 a3=88 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.371000 audit: BPF prog-id=237 op=LOAD Jan 28 01:40:08.371000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcd9f205b0 a2=94 a3=2 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.371000 audit: BPF prog-id=237 op=UNLOAD Jan 28 01:40:08.371000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcd9f205e0 a2=0 a3=7ffcd9f206e0 items=0 ppid=5767 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.383000 audit: BPF prog-id=236 op=UNLOAD Jan 28 01:40:08.383000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=33269d10 a2=0 a3=826329e255b7e542 items=0 ppid=5767 
pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:40:08.392701 containerd[1612]: time="2026-01-28T01:40:08.392466056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,}" Jan 28 01:40:08.471000 audit: BPF prog-id=228 op=UNLOAD Jan 28 01:40:08.471000 audit[5767]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009c4340 a2=0 a3=0 items=0 ppid=5698 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.471000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 01:40:08.488866 systemd-networkd[1516]: cali8ec43be8e73: Link UP Jan 28 01:40:08.537000 audit: BPF prog-id=238 op=LOAD Jan 28 01:40:08.555643 systemd-networkd[1516]: cali8ec43be8e73: Gained carrier Jan 28 01:40:08.568000 audit: BPF prog-id=239 op=LOAD Jan 28 01:40:08.568000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000aa238 a2=98 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.574000 audit: BPF prog-id=239 op=UNLOAD Jan 28 01:40:08.574000 audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.575000 audit: BPF prog-id=240 op=LOAD Jan 28 01:40:08.575000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000aa488 a2=98 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.575000 audit: BPF prog-id=241 op=LOAD Jan 28 01:40:08.575000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000aa218 a2=98 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:40:08.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.575000 audit: BPF prog-id=241 op=UNLOAD Jan 28 01:40:08.575000 audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.575000 audit: BPF prog-id=240 op=UNLOAD Jan 28 01:40:08.575000 audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.575000 audit: BPF prog-id=242 op=LOAD Jan 28 01:40:08.575000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000aa6e8 a2=98 a3=0 items=0 ppid=6067 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:08.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346233663465633832346663363834323862343566613564656262 Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:07.847 [INFO][6129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0 calico-apiserver-7fcc88c58b- calico-apiserver df1949f7-cac3-4cf6-8c60-f8d963a49163 1229 0 2026-01-28 01:36:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcc88c58b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fcc88c58b-57jrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ec43be8e73 [] [] }} ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:07.848 [INFO][6129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.220 [INFO][6160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" 
HandleID="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.220 [INFO][6160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" HandleID="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fcc88c58b-57jrt", "timestamp":"2026-01-28 01:40:08.220215409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.220 [INFO][6160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.220 [INFO][6160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.220 [INFO][6160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.249 [INFO][6160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.278 [INFO][6160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.313 [INFO][6160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.318 [INFO][6160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.331 [INFO][6160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.331 [INFO][6160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.344 [INFO][6160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3 Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.367 [INFO][6160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.437 [INFO][6160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.437 [INFO][6160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" host="localhost" Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.437 [INFO][6160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:40:08.690516 containerd[1612]: 2026-01-28 01:40:08.437 [INFO][6160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" HandleID="k8s-pod-network.4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.456 [INFO][6129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0", GenerateName:"calico-apiserver-7fcc88c58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"df1949f7-cac3-4cf6-8c60-f8d963a49163", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcc88c58b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fcc88c58b-57jrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ec43be8e73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.457 [INFO][6129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.457 [INFO][6129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ec43be8e73 ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.560 [INFO][6129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.563 [INFO][6129] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0", GenerateName:"calico-apiserver-7fcc88c58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"df1949f7-cac3-4cf6-8c60-f8d963a49163", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcc88c58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3", Pod:"calico-apiserver-7fcc88c58b-57jrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ec43be8e73", MAC:"be:4f:e6:b7:2f:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:08.702708 containerd[1612]: 2026-01-28 01:40:08.631 [INFO][6129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-57jrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--57jrt-eth0" Jan 28 01:40:09.017125 containerd[1612]: time="2026-01-28T01:40:09.016897325Z" level=info msg="connecting to shim 4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3" address="unix:///run/containerd/s/c1a34bc4b4a12caf6c84bc466f98cf6aa251ec82f4497d2f3e67a5573bc5ce98" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:09.219435 containerd[1612]: time="2026-01-28T01:40:09.219106095Z" level=info msg="StartContainer for \"154b3f4ec824fc68428b45fa5debb78b08f3f41ba91fb182f99ae76ce5ea77c0\" returns successfully" Jan 28 01:40:09.276826 systemd-networkd[1516]: calib9d37429b8f: Link UP Jan 28 01:40:09.304398 systemd-networkd[1516]: calib9d37429b8f: Gained carrier Jan 28 01:40:09.344468 systemd[1]: Started cri-containerd-4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3.scope - libcontainer container 4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3. 
Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.279 [INFO][6131] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0 coredns-668d6bf9bc- kube-system f0234091-1bfb-4c2b-914c-35e344cefc9d 1226 0 2026-01-28 01:35:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6cpzt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9d37429b8f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.279 [INFO][6131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.587 [INFO][6182] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" HandleID="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Workload="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.591 [INFO][6182] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" HandleID="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Workload="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc000388830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6cpzt", "timestamp":"2026-01-28 01:40:08.587986033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.591 [INFO][6182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.591 [INFO][6182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.591 [INFO][6182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.666 [INFO][6182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.719 [INFO][6182] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.822 [INFO][6182] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.842 [INFO][6182] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.852 [INFO][6182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.852 [INFO][6182] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" 
host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.863 [INFO][6182] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558 Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:08.905 [INFO][6182] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:09.094 [INFO][6182] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:09.117 [INFO][6182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" host="localhost" Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:09.148 [INFO][6182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:40:09.428411 containerd[1612]: 2026-01-28 01:40:09.149 [INFO][6182] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" HandleID="k8s-pod-network.c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Workload="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.219 [INFO][6131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0234091-1bfb-4c2b-914c-35e344cefc9d", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6cpzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9d37429b8f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.220 [INFO][6131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.220 [INFO][6131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9d37429b8f ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.306 [INFO][6131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.324 [INFO][6131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0234091-1bfb-4c2b-914c-35e344cefc9d", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558", Pod:"coredns-668d6bf9bc-6cpzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9d37429b8f", MAC:"b6:e6:86:e5:d2:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:09.429358 containerd[1612]: 2026-01-28 01:40:09.405 [INFO][6131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" Namespace="kube-system" Pod="coredns-668d6bf9bc-6cpzt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6cpzt-eth0" Jan 28 01:40:09.554000 audit[6309]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=6309 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:09.554000 audit[6309]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc4ac2bd70 a2=0 a3=7ffc4ac2bd5c items=0 ppid=5767 pid=6309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:09.574000 audit[6311]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=6311 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:09.574000 audit[6311]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe03467b40 a2=0 a3=7ffe03467b2c items=0 ppid=5767 pid=6311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.574000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:09.595000 audit[6308]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=6308 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:09.595000 audit[6308]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdf07276a0 a2=0 a3=7ffdf072768c items=0 ppid=5767 pid=6308 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.595000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:09.680000 audit: BPF prog-id=243 op=LOAD Jan 28 01:40:09.681825 containerd[1612]: time="2026-01-28T01:40:09.681043020Z" level=info msg="connecting to shim c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558" address="unix:///run/containerd/s/1edaf374fa29e6bd57a6c0b82e589c9e203d07ed9ef7dc8dd21832bfb3ab7d9f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:09.685000 audit: BPF prog-id=244 op=LOAD Jan 28 01:40:09.685000 audit[6264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.686000 audit: BPF prog-id=244 op=UNLOAD Jan 28 01:40:09.686000 audit[6264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.686000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.687000 audit: BPF prog-id=245 op=LOAD Jan 28 01:40:09.687000 audit[6264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.688000 audit: BPF prog-id=246 op=LOAD Jan 28 01:40:09.688000 audit[6264]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.688000 audit: BPF prog-id=246 op=UNLOAD Jan 28 01:40:09.688000 audit[6264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:40:09.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.690000 audit: BPF prog-id=245 op=UNLOAD Jan 28 01:40:09.690000 audit[6264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.691000 audit: BPF prog-id=247 op=LOAD Jan 28 01:40:09.691000 audit[6264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6247 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666435616337613061343932383631376136613663333033373838 Jan 28 01:40:09.718418 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:09.796175 systemd[1]: Started cri-containerd-c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558.scope - libcontainer container 
c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558. Jan 28 01:40:09.891000 audit: BPF prog-id=248 op=LOAD Jan 28 01:40:09.892000 audit: BPF prog-id=249 op=LOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=249 op=UNLOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=250 op=LOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=251 op=LOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=251 op=UNLOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=250 op=UNLOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.892000 audit: BPF prog-id=252 op=LOAD Jan 28 01:40:09.892000 audit[6342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6327 pid=6342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335623530356434616233343737333531393166316664653233386538 Jan 28 01:40:09.787000 audit[6337]: NETFILTER_CFG table=filter:130 family=2 entries=192 op=nft_register_chain pid=6337 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:09.787000 audit[6337]: SYSCALL arch=c000003e syscall=46 success=yes exit=111724 a0=3 a1=7ffc7b335560 a2=0 a3=7ffc7b33554c items=0 ppid=5767 pid=6337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:09.787000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:09.897127 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:09.904486 systemd-networkd[1516]: cali7b529bec867: Link UP Jan 28 01:40:09.915089 systemd-networkd[1516]: cali7b529bec867: 
Gained carrier Jan 28 01:40:09.946600 containerd[1612]: time="2026-01-28T01:40:09.944073909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-57jrt,Uid:df1949f7-cac3-4cf6-8c60-f8d963a49163,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4ffd5ac7a0a4928617a6a6c303788790d30c28df6d55bf6e2077bd55c71dbca3\"" Jan 28 01:40:09.976397 containerd[1612]: time="2026-01-28T01:40:09.975715804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:08.895 [INFO][6200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0 calico-kube-controllers-78b6655f44- calico-system 477c43dc-f740-4bfd-b59c-255fe52c8673 1237 0 2026-01-28 01:37:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78b6655f44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78b6655f44-dr84p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b529bec867 [] [] }} ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:08.896 [INFO][6200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.567 [INFO][6256] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" HandleID="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Workload="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.568 [INFO][6256] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" HandleID="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Workload="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78b6655f44-dr84p", "timestamp":"2026-01-28 01:40:09.567945801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.568 [INFO][6256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.569 [INFO][6256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.569 [INFO][6256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.594 [INFO][6256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.698 [INFO][6256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.765 [INFO][6256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.780 [INFO][6256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.786 [INFO][6256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.792 [INFO][6256] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.799 [INFO][6256] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676 Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.821 [INFO][6256] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.870 [INFO][6256] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.870 [INFO][6256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" host="localhost" Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.870 [INFO][6256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:40:10.041498 containerd[1612]: 2026-01-28 01:40:09.870 [INFO][6256] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" HandleID="k8s-pod-network.d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Workload="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.042883 containerd[1612]: 2026-01-28 01:40:09.885 [INFO][6200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0", GenerateName:"calico-kube-controllers-78b6655f44-", Namespace:"calico-system", SelfLink:"", UID:"477c43dc-f740-4bfd-b59c-255fe52c8673", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78b6655f44", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78b6655f44-dr84p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b529bec867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:10.042883 containerd[1612]: 2026-01-28 01:40:09.886 [INFO][6200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.042883 containerd[1612]: 2026-01-28 01:40:09.886 [INFO][6200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b529bec867 ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.042883 containerd[1612]: 2026-01-28 01:40:09.920 [INFO][6200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.042883 containerd[1612]: 
2026-01-28 01:40:09.921 [INFO][6200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0", GenerateName:"calico-kube-controllers-78b6655f44-", Namespace:"calico-system", SelfLink:"", UID:"477c43dc-f740-4bfd-b59c-255fe52c8673", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78b6655f44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676", Pod:"calico-kube-controllers-78b6655f44-dr84p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b529bec867", MAC:"32:c5:4e:2f:c0:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:10.042883 containerd[1612]: 
2026-01-28 01:40:09.965 [INFO][6200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" Namespace="calico-system" Pod="calico-kube-controllers-78b6655f44-dr84p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78b6655f44--dr84p-eth0" Jan 28 01:40:10.059000 audit[6381]: NETFILTER_CFG table=filter:131 family=2 entries=92 op=nft_register_chain pid=6381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:10.059000 audit[6381]: SYSCALL arch=c000003e syscall=46 success=yes exit=49692 a0=3 a1=7ffe47142200 a2=0 a3=7ffe471421ec items=0 ppid=5767 pid=6381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.059000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:10.107670 containerd[1612]: time="2026-01-28T01:40:10.107433685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:10.114582 containerd[1612]: time="2026-01-28T01:40:10.114427906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:10.118937 kubelet[2938]: E0128 01:40:10.118895 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:10.120939 kubelet[2938]: E0128 01:40:10.119361 2938 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:10.120939 kubelet[2938]: E0128 01:40:10.119518 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdbrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:10.120939 kubelet[2938]: E0128 01:40:10.120682 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:10.132193 containerd[1612]: time="2026-01-28T01:40:10.115816027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:10.134182 systemd-networkd[1516]: cali8ec43be8e73: Gained IPv6LL Jan 28 01:40:10.141690 containerd[1612]: 
time="2026-01-28T01:40:10.128870855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6cpzt,Uid:f0234091-1bfb-4c2b-914c-35e344cefc9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558\"" Jan 28 01:40:10.153708 kubelet[2938]: E0128 01:40:10.153675 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:10.164000 audit[6393]: NETFILTER_CFG table=filter:132 family=2 entries=52 op=nft_register_chain pid=6393 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:10.164000 audit[6393]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffc97a9f2d0 a2=0 a3=7ffc97a9f2bc items=0 ppid=5767 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.164000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:10.181355 containerd[1612]: time="2026-01-28T01:40:10.181106011Z" level=info msg="CreateContainer within sandbox \"c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:40:10.247785 containerd[1612]: time="2026-01-28T01:40:10.246057243Z" level=info msg="connecting to shim d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676" address="unix:///run/containerd/s/8371223fb3a06d63733bac95fa9215a09c6c1f47c7443591017ba7c5161b5b5d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:10.315434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167290391.mount: Deactivated successfully. 
Jan 28 01:40:10.333529 containerd[1612]: time="2026-01-28T01:40:10.333153131Z" level=info msg="Container 41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:40:10.370863 containerd[1612]: time="2026-01-28T01:40:10.369459721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:40:10.382846 kubelet[2938]: E0128 01:40:10.382685 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:10.382846 kubelet[2938]: E0128 01:40:10.384800 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:10.425556 containerd[1612]: time="2026-01-28T01:40:10.424473157Z" level=info msg="CreateContainer within sandbox \"c5b505d4ab347735191f1fde238e8c467594e6eef7aadcfcfc97b92b82d79558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d\"" Jan 28 01:40:10.440891 containerd[1612]: time="2026-01-28T01:40:10.434026375Z" level=info msg="StartContainer for \"41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d\"" Jan 28 01:40:10.466519 containerd[1612]: time="2026-01-28T01:40:10.465226917Z" level=info msg="connecting to shim 41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d" 
address="unix:///run/containerd/s/1edaf374fa29e6bd57a6c0b82e589c9e203d07ed9ef7dc8dd21832bfb3ab7d9f" protocol=ttrpc version=3 Jan 28 01:40:10.530000 audit[6446]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=6446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:10.530000 audit[6446]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe40fdb890 a2=0 a3=7ffe40fdb87c items=0 ppid=3042 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:10.573980 kubelet[2938]: I0128 01:40:10.573113 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2lrhs" podStartSLOduration=300.573089947 podStartE2EDuration="5m0.573089947s" podCreationTimestamp="2026-01-28 01:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:40:10.439052735 +0000 UTC m=+301.200525619" watchObservedRunningTime="2026-01-28 01:40:10.573089947 +0000 UTC m=+301.334562832" Jan 28 01:40:10.573666 systemd[1]: Started cri-containerd-d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676.scope - libcontainer container d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676. 
Jan 28 01:40:10.599000 audit[6446]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=6446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:10.599000 audit[6446]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe40fdb890 a2=0 a3=0 items=0 ppid=3042 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:10.680437 systemd[1]: Started cri-containerd-41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d.scope - libcontainer container 41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d. Jan 28 01:40:10.708843 systemd-networkd[1516]: calib9d37429b8f: Gained IPv6LL Jan 28 01:40:10.756000 audit: BPF prog-id=253 op=LOAD Jan 28 01:40:10.766000 audit: BPF prog-id=254 op=LOAD Jan 28 01:40:10.766000 audit[6437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=254 op=UNLOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=255 op=LOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=256 op=LOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=256 op=UNLOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=255 op=UNLOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.778000 audit: BPF prog-id=257 op=LOAD Jan 28 01:40:10.778000 audit[6437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=6327 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343831653537366239373264663163306366316264323566616431 Jan 28 01:40:10.914000 audit[6472]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=6472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 
01:40:10.914000 audit[6472]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff8c78f80 a2=0 a3=7ffff8c78f6c items=0 ppid=3042 pid=6472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:10.931000 audit[6472]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=6472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:10.931000 audit[6472]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffff8c78f80 a2=0 a3=7ffff8c78f6c items=0 ppid=3042 pid=6472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:10.939000 audit: BPF prog-id=258 op=LOAD Jan 28 01:40:10.939000 audit: BPF prog-id=259 op=LOAD Jan 28 01:40:10.939000 audit[6414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:10.939000 audit: BPF prog-id=259 op=UNLOAD Jan 28 
01:40:10.939000 audit[6414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:10.946000 audit: BPF prog-id=260 op=LOAD Jan 28 01:40:10.946000 audit[6414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:10.946000 audit: BPF prog-id=261 op=LOAD Jan 28 01:40:10.946000 audit[6414]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 
01:40:10.947000 audit: BPF prog-id=261 op=UNLOAD Jan 28 01:40:10.947000 audit[6414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:10.947000 audit: BPF prog-id=260 op=UNLOAD Jan 28 01:40:10.947000 audit[6414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:10.948000 audit: BPF prog-id=262 op=LOAD Jan 28 01:40:10.948000 audit[6414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=6402 pid=6414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:10.948000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431366339323161323934306638613336663632663839653938353663 Jan 28 01:40:11.586230 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:11.672600 systemd-networkd[1516]: cali7b529bec867: Gained IPv6LL Jan 28 01:40:11.770581 kubelet[2938]: E0128 01:40:11.769585 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:11.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.88:22-10.0.0.1:48136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:11.790912 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:48136.service - OpenSSH per-connection server daemon (10.0.0.1:48136). 
Jan 28 01:40:11.860968 kubelet[2938]: E0128 01:40:11.854204 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:12.189723 containerd[1612]: time="2026-01-28T01:40:12.189677075Z" level=info msg="StartContainer for \"41481e576b972df1c0cf1bd25fad12e37bf422101673feabbfb0c13eb94a816d\" returns successfully" Jan 28 01:40:12.681884 containerd[1612]: time="2026-01-28T01:40:12.680524419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78b6655f44-dr84p,Uid:477c43dc-f740-4bfd-b59c-255fe52c8673,Namespace:calico-system,Attempt:0,} returns sandbox id \"d16c921a2940f8a36f62f89e9856c3b3e40994137fbb62764a7eca57f7817676\"" Jan 28 01:40:12.698463 kernel: kauditd_printk_skb: 249 callbacks suppressed Jan 28 01:40:12.698632 kernel: audit: type=1101 audit(1769564412.690:834): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.690000 audit[6488]: USER_ACCT pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.699139 containerd[1612]: time="2026-01-28T01:40:12.692758940Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:40:12.699230 sshd[6488]: Accepted publickey for core from 10.0.0.1 port 48136 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:12.706150 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:12.702000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.745393 kernel: audit: type=1103 audit(1769564412.702:835): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.744914 systemd-logind[1594]: New session 16 of user core. Jan 28 01:40:12.766972 kernel: audit: type=1006 audit(1769564412.702:836): pid=6488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 01:40:12.767102 kernel: audit: type=1300 audit(1769564412.702:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe748719c0 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:12.702000 audit[6488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe748719c0 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:12.816579 containerd[1612]: time="2026-01-28T01:40:12.816212509Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Jan 28 01:40:12.816828 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:40:12.826969 containerd[1612]: time="2026-01-28T01:40:12.820237510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:40:12.826969 containerd[1612]: time="2026-01-28T01:40:12.820788840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:12.827107 kubelet[2938]: E0128 01:40:12.824785 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:40:12.827107 kubelet[2938]: E0128 01:40:12.824885 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:40:12.827107 kubelet[2938]: E0128 01:40:12.825041 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twgjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:12.827107 kubelet[2938]: E0128 01:40:12.826730 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:40:12.857088 kernel: audit: type=1327 audit(1769564412.702:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:12.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:12.858707 kubelet[2938]: E0128 01:40:12.852675 2938 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:12.827000 audit[6488]: USER_START pid=6488 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.893451 kernel: audit: type=1105 audit(1769564412.827:837): pid=6488 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.860000 audit[6508]: CRED_ACQ pid=6508 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.928426 kernel: audit: type=1103 audit(1769564412.860:838): pid=6508 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:12.980735 systemd-networkd[1516]: cali37cc717c5f9: Link UP Jan 28 01:40:12.981528 systemd-networkd[1516]: cali37cc717c5f9: Gained carrier Jan 28 01:40:13.054155 kubelet[2938]: I0128 01:40:13.048964 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6cpzt" podStartSLOduration=303.04893813 podStartE2EDuration="5m3.04893813s" podCreationTimestamp="2026-01-28 01:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:40:12.939856521 +0000 UTC m=+303.701329406" watchObservedRunningTime="2026-01-28 01:40:13.04893813 +0000 UTC m=+303.810411025" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:10.805 [INFO][6421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0 calico-apiserver-7fcc88c58b- calico-apiserver 38764aa9-f6ea-4a8f-ac0e-198fa6f97144 1232 0 2026-01-28 01:36:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcc88c58b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fcc88c58b-n2mcr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali37cc717c5f9 [] [] }} ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:10.817 [INFO][6421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:11.805 [INFO][6467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" HandleID="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.093822 
containerd[1612]: 2026-01-28 01:40:11.842 [INFO][6467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" HandleID="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034de50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fcc88c58b-n2mcr", "timestamp":"2026-01-28 01:40:11.805179506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:11.864 [INFO][6467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:11.864 [INFO][6467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:11.864 [INFO][6467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.377 [INFO][6467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.584 [INFO][6467] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.634 [INFO][6467] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.656 [INFO][6467] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.664 [INFO][6467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.664 [INFO][6467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.687 [INFO][6467] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38 Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.730 [INFO][6467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.790 [INFO][6467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.791 [INFO][6467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" host="localhost" Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.791 [INFO][6467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:40:13.093822 containerd[1612]: 2026-01-28 01:40:12.791 [INFO][6467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" HandleID="k8s-pod-network.d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Workload="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:12.861 [INFO][6421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0", GenerateName:"calico-apiserver-7fcc88c58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"38764aa9-f6ea-4a8f-ac0e-198fa6f97144", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 36, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcc88c58b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fcc88c58b-n2mcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cc717c5f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:12.861 [INFO][6421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:12.861 [INFO][6421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37cc717c5f9 ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:12.980 [INFO][6421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:12.981 [INFO][6421] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0", GenerateName:"calico-apiserver-7fcc88c58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"38764aa9-f6ea-4a8f-ac0e-198fa6f97144", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 36, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcc88c58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38", Pod:"calico-apiserver-7fcc88c58b-n2mcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cc717c5f9", MAC:"46:cd:9b:a4:07:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:40:13.094978 containerd[1612]: 2026-01-28 01:40:13.043 [INFO][6421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" Namespace="calico-apiserver" Pod="calico-apiserver-7fcc88c58b-n2mcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcc88c58b--n2mcr-eth0" Jan 28 01:40:13.108000 audit[6522]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=6522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:13.126377 kernel: audit: type=1325 audit(1769564413.108:839): table=filter:137 family=2 entries=14 op=nft_register_rule pid=6522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:13.108000 audit[6522]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc669f80c0 a2=0 a3=7ffc669f80ac items=0 ppid=3042 pid=6522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.209429 kernel: audit: type=1300 audit(1769564413.108:839): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc669f80c0 a2=0 a3=7ffc669f80ac items=0 ppid=3042 pid=6522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.209531 kernel: audit: type=1327 audit(1769564413.108:839): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:13.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:13.136000 audit[6522]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=6522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:13.136000 audit[6522]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc669f80c0 a2=0 
a3=7ffc669f80ac items=0 ppid=3042 pid=6522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:13.236572 containerd[1612]: time="2026-01-28T01:40:13.236460552Z" level=info msg="connecting to shim d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38" address="unix:///run/containerd/s/88f14460aaebe0b90649fd05fe1bfc5629f395d611572fe9f02595e869ee7331" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:40:13.316000 audit[6558]: NETFILTER_CFG table=filter:139 family=2 entries=57 op=nft_register_chain pid=6558 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:40:13.316000 audit[6558]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffe47c86990 a2=0 a3=7ffe47c8697c items=0 ppid=5767 pid=6558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.316000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:40:13.439000 audit[6564]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=6564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:13.439000 audit[6564]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7f23b790 a2=0 a3=7ffc7f23b77c items=0 ppid=3042 pid=6564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:40:13.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:13.460496 sshd[6508]: Connection closed by 10.0.0.1 port 48136 Jan 28 01:40:13.465965 sshd-session[6488]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:13.471000 audit[6488]: USER_END pid=6488 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:13.471000 audit[6488]: CRED_DISP pid=6488 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:13.477896 systemd[1]: Started cri-containerd-d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38.scope - libcontainer container d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38. Jan 28 01:40:13.511000 audit[6564]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=6564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:13.511000 audit[6564]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc7f23b790 a2=0 a3=7ffc7f23b77c items=0 ppid=3042 pid=6564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:13.518478 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:48136.service: Deactivated successfully. 
Jan 28 01:40:13.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.88:22-10.0.0.1:48136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:13.535922 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:40:13.546007 systemd-logind[1594]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:40:13.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.88:22-10.0.0.1:56870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:13.571790 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:56870.service - OpenSSH per-connection server daemon (10.0.0.1:56870). Jan 28 01:40:13.583000 audit: BPF prog-id=263 op=LOAD Jan 28 01:40:13.596000 audit: BPF prog-id=264 op=LOAD Jan 28 01:40:13.596000 audit[6551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.601000 audit: BPF prog-id=264 op=UNLOAD Jan 28 01:40:13.601000 audit[6551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.601000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.604000 audit: BPF prog-id=265 op=LOAD Jan 28 01:40:13.604000 audit[6551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.600788 systemd-logind[1594]: Removed session 16. 
Jan 28 01:40:13.611000 audit: BPF prog-id=266 op=LOAD Jan 28 01:40:13.611000 audit[6551]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.621000 audit: BPF prog-id=266 op=UNLOAD Jan 28 01:40:13.621000 audit[6551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.621000 audit: BPF prog-id=265 op=UNLOAD Jan 28 01:40:13.621000 audit[6551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.621000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.621000 audit: BPF prog-id=267 op=LOAD Jan 28 01:40:13.621000 audit[6551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6539 pid=6551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436366332643436363539356262336230343363636532303730633062 Jan 28 01:40:13.630767 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:40:13.889644 kubelet[2938]: E0128 01:40:13.886071 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:13.895018 kubelet[2938]: E0128 01:40:13.894972 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:40:13.956000 audit[6577]: USER_ACCT 
pid=6577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:13.957773 sshd[6577]: Accepted publickey for core from 10.0.0.1 port 56870 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:13.960000 audit[6577]: CRED_ACQ pid=6577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:13.960000 audit[6577]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd95d0f170 a2=3 a3=0 items=0 ppid=1 pid=6577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:13.960000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:13.968910 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:14.030471 systemd-logind[1594]: New session 17 of user core. Jan 28 01:40:14.043479 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 01:40:14.095000 audit[6577]: USER_START pid=6577 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:14.100000 audit[6587]: CRED_ACQ pid=6587 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:14.126529 containerd[1612]: time="2026-01-28T01:40:14.126421539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcc88c58b-n2mcr,Uid:38764aa9-f6ea-4a8f-ac0e-198fa6f97144,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d66c2d466595bb3b043cce2070c0b13b2eba3c1aa373fc70f8bd1aa39268cc38\"" Jan 28 01:40:14.163496 containerd[1612]: time="2026-01-28T01:40:14.162889540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:14.271035 containerd[1612]: time="2026-01-28T01:40:14.270982003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:14.285854 containerd[1612]: time="2026-01-28T01:40:14.285726849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:14.286226 containerd[1612]: time="2026-01-28T01:40:14.286174554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:14.292429 kubelet[2938]: E0128 01:40:14.292097 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:14.300556 kubelet[2938]: E0128 01:40:14.298865 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:14.300556 kubelet[2938]: E0128 01:40:14.299242 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckwxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:14.301728 kubelet[2938]: E0128 01:40:14.301224 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:40:14.551690 systemd-networkd[1516]: cali37cc717c5f9: Gained IPv6LL Jan 28 01:40:14.914640 kubelet[2938]: E0128 01:40:14.914469 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 28 01:40:14.925864 kubelet[2938]: E0128 01:40:14.925749 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:40:15.060582 sshd[6587]: Connection closed by 10.0.0.1 port 56870 Jan 28 01:40:15.061594 sshd-session[6577]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:15.074000 audit[6577]: USER_END pid=6577 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.074000 audit[6577]: CRED_DISP pid=6577 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.106224 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:56870.service: Deactivated successfully. Jan 28 01:40:15.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.88:22-10.0.0.1:56870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:15.125649 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:40:15.137669 systemd-logind[1594]: Session 17 logged out. Waiting for processes to exit. 
Jan 28 01:40:15.158694 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:56884.service - OpenSSH per-connection server daemon (10.0.0.1:56884). Jan 28 01:40:15.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.88:22-10.0.0.1:56884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:15.173731 systemd-logind[1594]: Removed session 17. Jan 28 01:40:15.203000 audit[6608]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=6608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:15.203000 audit[6608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe73413910 a2=0 a3=7ffe734138fc items=0 ppid=3042 pid=6608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:15.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:15.220000 audit[6608]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=6608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:40:15.220000 audit[6608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe73413910 a2=0 a3=7ffe734138fc items=0 ppid=3042 pid=6608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:15.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:40:15.428000 audit[6607]: USER_ACCT pid=6607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.428729 sshd[6607]: Accepted publickey for core from 10.0.0.1 port 56884 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:15.428000 audit[6607]: CRED_ACQ pid=6607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.428000 audit[6607]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdabffa4c0 a2=3 a3=0 items=0 ppid=1 pid=6607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:15.428000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:15.440554 sshd-session[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:15.500949 systemd-logind[1594]: New session 18 of user core. Jan 28 01:40:15.512711 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:40:15.536000 audit[6607]: USER_START pid=6607 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.546000 audit[6612]: CRED_ACQ pid=6612 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:15.940932 kubelet[2938]: E0128 01:40:15.936110 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:40:16.127377 containerd[1612]: time="2026-01-28T01:40:16.127095042Z" level=info msg="container event discarded" container=5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f type=CONTAINER_CREATED_EVENT Jan 28 01:40:16.128166 containerd[1612]: time="2026-01-28T01:40:16.128140904Z" level=info msg="container event discarded" container=5d69cc9e2dfd16078a5848bcfcf1bb3157b227a9236caf264720f83f2210553f type=CONTAINER_STARTED_EVENT Jan 28 01:40:16.184930 sshd[6612]: Connection closed by 10.0.0.1 port 56884 Jan 28 01:40:16.186634 sshd-session[6607]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:16.188000 audit[6607]: USER_END pid=6607 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:16.188000 audit[6607]: CRED_DISP pid=6607 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:16.203979 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:56884.service: Deactivated successfully. Jan 28 01:40:16.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.88:22-10.0.0.1:56884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:16.217476 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:40:16.226712 systemd-logind[1594]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:40:16.237449 systemd-logind[1594]: Removed session 18. 
Jan 28 01:40:16.723016 containerd[1612]: time="2026-01-28T01:40:16.722873274Z" level=info msg="container event discarded" container=888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814 type=CONTAINER_CREATED_EVENT Jan 28 01:40:18.313408 containerd[1612]: time="2026-01-28T01:40:18.305497570Z" level=info msg="container event discarded" container=888236a94dca2c8c926aa09e38eab55768b815c90473277ee9d363333562b814 type=CONTAINER_STARTED_EVENT Jan 28 01:40:20.419094 containerd[1612]: time="2026-01-28T01:40:20.414778528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:40:20.595894 containerd[1612]: time="2026-01-28T01:40:20.595610926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:20.622665 containerd[1612]: time="2026-01-28T01:40:20.618759415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:40:20.622665 containerd[1612]: time="2026-01-28T01:40:20.618884479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:20.622958 kubelet[2938]: E0128 01:40:20.620722 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:40:20.622958 kubelet[2938]: E0128 01:40:20.620788 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 
01:40:20.622958 kubelet[2938]: E0128 01:40:20.621033 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49231b61cab941e2b913be4ad476f1ab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:20.631494 containerd[1612]: time="2026-01-28T01:40:20.630798469Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:40:20.807962 containerd[1612]: time="2026-01-28T01:40:20.807807532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:20.824943 containerd[1612]: time="2026-01-28T01:40:20.824731837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:40:20.834749 containerd[1612]: time="2026-01-28T01:40:20.825248311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:20.834886 kubelet[2938]: E0128 01:40:20.829880 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:20.834886 kubelet[2938]: E0128 01:40:20.829946 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:20.837606 containerd[1612]: time="2026-01-28T01:40:20.835963275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:40:20.853686 kubelet[2938]: E0128 01:40:20.830215 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:40:20.976731 containerd[1612]: time="2026-01-28T01:40:20.976672170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:20.990607 containerd[1612]: time="2026-01-28T01:40:20.990531794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:40:20.993504 containerd[1612]: time="2026-01-28T01:40:20.990549657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:20.996757 kubelet[2938]: E0128 01:40:20.996645 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:40:20.996970 kubelet[2938]: E0128 01:40:20.996942 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:40:21.000148 kubelet[2938]: E0128 01:40:21.000059 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:21.006106 containerd[1612]: time="2026-01-28T01:40:21.002892785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:40:21.006210 kubelet[2938]: E0128 01:40:21.005931 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:40:21.142842 containerd[1612]: time="2026-01-28T01:40:21.134829702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:21.163880 containerd[1612]: time="2026-01-28T01:40:21.161671500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:40:21.163880 containerd[1612]: time="2026-01-28T01:40:21.161801413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:21.164088 kubelet[2938]: E0128 01:40:21.163017 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:40:21.164088 kubelet[2938]: E0128 01:40:21.163085 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:40:21.164088 kubelet[2938]: E0128 01:40:21.163523 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:21.169540 kubelet[2938]: E0128 01:40:21.168840 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:40:21.341640 kernel: kauditd_printk_skb: 65 callbacks suppressed Jan 28 01:40:21.341786 kernel: audit: type=1130 audit(1769564421.288:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:56892 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:21.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:56892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:21.289011 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:56892.service - OpenSSH per-connection server daemon (10.0.0.1:56892). Jan 28 01:40:21.380708 containerd[1612]: time="2026-01-28T01:40:21.377829329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:40:21.596063 containerd[1612]: time="2026-01-28T01:40:21.590004042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:21.612220 containerd[1612]: time="2026-01-28T01:40:21.611603119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:40:21.613249 containerd[1612]: time="2026-01-28T01:40:21.613103970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:21.614829 kubelet[2938]: E0128 01:40:21.614546 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:21.614829 kubelet[2938]: E0128 01:40:21.614609 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:21.614829 kubelet[2938]: E0128 01:40:21.614761 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqn7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:21.616214 kubelet[2938]: E0128 01:40:21.616039 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:21.771939 sshd[6635]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:21.769000 audit[6635]: USER_ACCT pid=6635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:21.804963 sshd-session[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:21.867158 kernel: audit: type=1101 audit(1769564421.769:876): pid=6635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:21.867529 kernel: audit: type=1103 audit(1769564421.782:877): pid=6635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:21.782000 audit[6635]: CRED_ACQ pid=6635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:21.889756 systemd-logind[1594]: New session 19 of user core. 
Jan 28 01:40:21.977589 kernel: audit: type=1006 audit(1769564421.782:878): pid=6635 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 01:40:21.980401 kernel: audit: type=1300 audit(1769564421.782:878): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb7621c40 a2=3 a3=0 items=0 ppid=1 pid=6635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:21.782000 audit[6635]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb7621c40 a2=3 a3=0 items=0 ppid=1 pid=6635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:22.020558 kernel: audit: type=1327 audit(1769564421.782:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:21.782000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:22.046247 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 28 01:40:22.076000 audit[6635]: USER_START pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.139863 kernel: audit: type=1105 audit(1769564422.076:879): pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.090000 audit[6639]: CRED_ACQ pid=6639 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.204432 kernel: audit: type=1103 audit(1769564422.090:880): pid=6639 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.777094 sshd[6639]: Connection closed by 10.0.0.1 port 56892 Jan 28 01:40:22.783103 sshd-session[6635]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:22.789000 audit[6635]: USER_END pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.810098 systemd-logind[1594]: Session 19 logged out. Waiting for processes to exit. 
Jan 28 01:40:22.814206 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:56892.service: Deactivated successfully. Jan 28 01:40:22.841119 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:40:22.855519 kernel: audit: type=1106 audit(1769564422.789:881): pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.855650 kernel: audit: type=1104 audit(1769564422.789:882): pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.789000 audit[6635]: CRED_DISP pid=6635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:22.864932 systemd-logind[1594]: Removed session 19. Jan 28 01:40:22.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:56892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:40:23.407443 containerd[1612]: time="2026-01-28T01:40:23.389701883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:23.566411 containerd[1612]: time="2026-01-28T01:40:23.564217855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:23.577129 containerd[1612]: time="2026-01-28T01:40:23.576966976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:23.577129 containerd[1612]: time="2026-01-28T01:40:23.577083082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:23.579108 kubelet[2938]: E0128 01:40:23.578152 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:23.579108 kubelet[2938]: E0128 01:40:23.578220 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:23.579108 kubelet[2938]: E0128 01:40:23.578573 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdbrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:23.589130 kubelet[2938]: E0128 01:40:23.583791 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:24.382724 containerd[1612]: time="2026-01-28T01:40:24.381865432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:40:24.490679 containerd[1612]: time="2026-01-28T01:40:24.486068212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:24.499221 containerd[1612]: time="2026-01-28T01:40:24.498918212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:40:24.499221 containerd[1612]: time="2026-01-28T01:40:24.498979986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:24.506192 kubelet[2938]: E0128 01:40:24.502618 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:40:24.506192 kubelet[2938]: E0128 01:40:24.502803 2938 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:40:24.506192 kubelet[2938]: E0128 01:40:24.503624 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twgjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:24.507608 kubelet[2938]: E0128 01:40:24.507553 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:40:25.672100 
containerd[1612]: time="2026-01-28T01:40:25.670241447Z" level=info msg="container event discarded" container=9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba type=CONTAINER_CREATED_EVENT Jan 28 01:40:25.672100 containerd[1612]: time="2026-01-28T01:40:25.670503407Z" level=info msg="container event discarded" container=9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba type=CONTAINER_STARTED_EVENT Jan 28 01:40:27.877584 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:50988.service - OpenSSH per-connection server daemon (10.0.0.1:50988). Jan 28 01:40:27.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:50988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:27.916771 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:40:27.916929 kernel: audit: type=1130 audit(1769564427.877:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:50988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:40:28.232000 audit[6657]: USER_ACCT pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.261483 sshd[6657]: Accepted publickey for core from 10.0.0.1 port 50988 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:28.388242 kernel: audit: type=1101 audit(1769564428.232:885): pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.415000 audit[6657]: CRED_ACQ pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.423100 sshd-session[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:28.497194 kernel: audit: type=1103 audit(1769564428.415:886): pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.497464 kernel: audit: type=1006 audit(1769564428.415:887): pid=6657 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 01:40:28.495852 systemd-logind[1594]: New session 20 of user core. 
Jan 28 01:40:28.415000 audit[6657]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9431ae0 a2=3 a3=0 items=0 ppid=1 pid=6657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:28.514967 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:40:28.543249 containerd[1612]: time="2026-01-28T01:40:28.507867234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:28.578010 kernel: audit: type=1300 audit(1769564428.415:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9431ae0 a2=3 a3=0 items=0 ppid=1 pid=6657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:28.578053 kernel: audit: type=1327 audit(1769564428.415:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:28.415000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:28.578000 audit[6657]: USER_START pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.633519 kernel: audit: type=1105 audit(1769564428.578:888): pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.698434 kernel: audit: type=1103 
audit(1769564428.585:889): pid=6661 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.585000 audit[6661]: CRED_ACQ pid=6661 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:28.765250 containerd[1612]: time="2026-01-28T01:40:28.764549174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:28.778663 containerd[1612]: time="2026-01-28T01:40:28.777555014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:28.778663 containerd[1612]: time="2026-01-28T01:40:28.777980207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:28.779067 kubelet[2938]: E0128 01:40:28.778584 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:28.779067 kubelet[2938]: E0128 01:40:28.778650 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:28.779067 kubelet[2938]: E0128 01:40:28.778810 2938 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckwxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:28.781030 kubelet[2938]: E0128 01:40:28.780822 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:40:29.570710 sshd[6661]: Connection closed by 10.0.0.1 port 50988 Jan 28 01:40:29.568821 sshd-session[6657]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:29.573000 audit[6657]: USER_END pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:29.590644 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:50988.service: Deactivated successfully. Jan 28 01:40:29.603789 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:40:29.639048 kernel: audit: type=1106 audit(1769564429.573:890): pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:29.580000 audit[6657]: CRED_DISP pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:29.653714 systemd-logind[1594]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:40:29.662815 systemd-logind[1594]: Removed session 20. Jan 28 01:40:29.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:50988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:40:29.710831 kernel: audit: type=1104 audit(1769564429.580:891): pid=6657 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:30.058108 kubelet[2938]: E0128 01:40:30.058069 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:34.545973 kubelet[2938]: E0128 01:40:34.515225 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:40:34.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:37886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:40:34.691226 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:40:34.692926 kernel: audit: type=1130 audit(1769564434.681:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:37886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:34.682641 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:37886.service - OpenSSH per-connection server daemon (10.0.0.1:37886). Jan 28 01:40:35.225000 audit[6708]: USER_ACCT pid=6708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.313992 kernel: audit: type=1101 audit(1769564435.225:894): pid=6708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.314494 kernel: audit: type=1103 audit(1769564435.294:895): pid=6708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.294000 audit[6708]: CRED_ACQ pid=6708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.309085 sshd-session[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:35.321242 sshd[6708]: Accepted publickey for core from 10.0.0.1 port 37886 ssh2: RSA 
SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:35.386001 kernel: audit: type=1006 audit(1769564435.294:896): pid=6708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 01:40:35.378557 systemd-logind[1594]: New session 21 of user core. Jan 28 01:40:35.294000 audit[6708]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce02607e0 a2=3 a3=0 items=0 ppid=1 pid=6708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:35.505918 kernel: audit: type=1300 audit(1769564435.294:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce02607e0 a2=3 a3=0 items=0 ppid=1 pid=6708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:35.508641 kernel: audit: type=1327 audit(1769564435.294:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:35.294000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:35.530644 kubelet[2938]: E0128 01:40:35.530521 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:40:35.570200 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:40:35.665000 audit[6708]: USER_START pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.740244 kernel: audit: type=1105 audit(1769564435.665:897): pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.740833 kernel: audit: type=1103 audit(1769564435.709:898): pid=6712 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:35.709000 audit[6712]: CRED_ACQ pid=6712 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:36.310615 sshd[6712]: Connection closed by 10.0.0.1 port 37886 Jan 28 01:40:36.309995 sshd-session[6708]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:36.314000 audit[6708]: USER_END pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:36.363466 kernel: audit: type=1106 audit(1769564436.314:899): pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:36.363603 kernel: audit: type=1104 audit(1769564436.315:900): pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:36.315000 audit[6708]: CRED_DISP pid=6708 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:36.377469 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:37886.service: Deactivated successfully. Jan 28 01:40:36.391929 kubelet[2938]: E0128 01:40:36.391888 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:36.394114 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 28 01:40:36.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:37886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:36.409684 kubelet[2938]: E0128 01:40:36.394875 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:36.409684 kubelet[2938]: E0128 01:40:36.394939 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:40:36.400800 systemd-logind[1594]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:40:36.403768 systemd-logind[1594]: Removed session 21. 
Jan 28 01:40:49.767561 kubelet[2938]: E0128 01:40:49.742553 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:49.767561 kubelet[2938]: E0128 01:40:49.765895 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:49.912938 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:37896.service - OpenSSH per-connection server daemon (10.0.0.1:37896). Jan 28 01:40:49.931389 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:40:49.931718 kernel: audit: type=1130 audit(1769564449.913:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:49.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:49.972398 containerd[1612]: time="2026-01-28T01:40:49.971747510Z" level=info msg="container event discarded" container=e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c type=CONTAINER_CREATED_EVENT Jan 28 01:40:49.995640 systemd[1]: cri-containerd-0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb.scope: Deactivated successfully. Jan 28 01:40:50.004692 systemd[1]: cri-containerd-0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb.scope: Consumed 17.232s CPU time, 58.5M memory peak, 3.3M read from disk. 
Jan 28 01:40:50.054368 kernel: audit: type=1334 audit(1769564450.033:903): prog-id=158 op=UNLOAD Jan 28 01:40:50.033000 audit: BPF prog-id=158 op=UNLOAD Jan 28 01:40:50.033000 audit: BPF prog-id=162 op=UNLOAD Jan 28 01:40:50.064634 kernel: audit: type=1334 audit(1769564450.033:904): prog-id=162 op=UNLOAD Jan 28 01:40:50.067737 systemd[1]: cri-containerd-e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c.scope: Deactivated successfully. Jan 28 01:40:50.068400 systemd[1]: cri-containerd-e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c.scope: Consumed 30.486s CPU time, 87.2M memory peak, 8.4M read from disk. Jan 28 01:40:50.078000 audit: BPF prog-id=146 op=UNLOAD Jan 28 01:40:50.078000 audit: BPF prog-id=150 op=UNLOAD Jan 28 01:40:50.107368 kernel: audit: type=1334 audit(1769564450.078:905): prog-id=146 op=UNLOAD Jan 28 01:40:50.107560 kernel: audit: type=1334 audit(1769564450.078:906): prog-id=150 op=UNLOAD Jan 28 01:40:50.139082 containerd[1612]: time="2026-01-28T01:40:50.138953502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:57.977872 systemd[1]: cri-containerd-714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5.scope: Deactivated successfully. Jan 28 01:40:57.978950 systemd[1]: cri-containerd-714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5.scope: Consumed 9.975s CPU time, 24.6M memory peak, 2.1M read from disk. 
Jan 28 01:40:58.019746 kernel: audit: type=1334 audit(1769564457.990:907): prog-id=153 op=UNLOAD Jan 28 01:40:57.990000 audit: BPF prog-id=153 op=UNLOAD Jan 28 01:40:57.990000 audit: BPF prog-id=157 op=UNLOAD Jan 28 01:40:58.062240 kernel: audit: type=1334 audit(1769564457.990:908): prog-id=157 op=UNLOAD Jan 28 01:40:58.080600 containerd[1612]: time="2026-01-28T01:40:55.685173190Z" level=info msg="container event discarded" container=e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c type=CONTAINER_STARTED_EVENT Jan 28 01:40:58.083547 containerd[1612]: time="2026-01-28T01:40:57.690777057Z" level=info msg="received container exit event container_id:\"0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb\" id:\"0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb\" pid:3431 exit_status:1 exited_at:{seconds:1769564452 nanos:468668170}" Jan 28 01:40:58.102161 containerd[1612]: time="2026-01-28T01:40:57.577027647Z" level=error msg="post event" error="context deadline exceeded" Jan 28 01:40:58.102161 containerd[1612]: time="2026-01-28T01:40:58.076560724Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Jan 28 01:40:58.104986 containerd[1612]: time="2026-01-28T01:40:58.104877465Z" level=info msg="received container exit event container_id:\"714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5\" id:\"714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5\" pid:3416 exit_status:1 exited_at:{seconds:1769564458 nanos:21948323}" Jan 28 01:40:58.214384 kernel: audit: type=1101 audit(1769564458.157:909): pid=6728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.157000 audit[6728]: USER_ACCT pid=6728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.214819 sshd[6728]: Accepted publickey for core from 10.0.0.1 port 37896 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:40:58.215000 audit[6728]: CRED_ACQ pid=6728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.261798 kernel: audit: type=1103 audit(1769564458.215:910): pid=6728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.265434 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:40:58.329649 kernel: audit: type=1006 audit(1769564458.215:911): pid=6728 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 01:40:58.329782 kernel: audit: type=1300 audit(1769564458.215:911): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc40db1770 a2=3 a3=0 items=0 ppid=1 pid=6728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:58.215000 audit[6728]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc40db1770 a2=3 a3=0 items=0 ppid=1 pid=6728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:40:58.374803 containerd[1612]: time="2026-01-28T01:40:58.373014601Z" 
level=info msg="received container exit event container_id:\"e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c\" id:\"e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c\" pid:3270 exit_status:1 exited_at:{seconds:1769564458 nanos:265548170}" Jan 28 01:40:58.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:58.397609 systemd-logind[1594]: New session 22 of user core. Jan 28 01:40:58.438890 kernel: audit: type=1327 audit(1769564458.215:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:40:58.459212 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:40:58.540000 audit[6728]: USER_START pid=6728 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.636383 kubelet[2938]: E0128 01:40:58.625495 2938 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.107s" Jan 28 01:40:58.715409 kernel: audit: type=1105 audit(1769564458.540:912): pid=6728 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.717714 kernel: audit: type=1103 audit(1769564458.551:913): pid=6736 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.551000 audit[6736]: CRED_ACQ pid=6736 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:58.937111 kubelet[2938]: E0128 01:40:58.890806 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:59.045013 containerd[1612]: time="2026-01-28T01:40:59.043618190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:59.067838 containerd[1612]: time="2026-01-28T01:40:59.064893330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:59.067838 containerd[1612]: time="2026-01-28T01:40:59.065118190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:59.072528 kubelet[2938]: E0128 01:40:59.068246 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:59.072528 kubelet[2938]: E0128 01:40:59.068514 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:59.080369 containerd[1612]: time="2026-01-28T01:40:59.077548116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 
01:40:59.115746 kubelet[2938]: E0128 01:40:59.073658 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckwxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:59.119745 kubelet[2938]: E0128 01:40:59.119614 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:40:59.337211 containerd[1612]: time="2026-01-28T01:40:59.328431785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:59.337211 containerd[1612]: time="2026-01-28T01:40:59.333762693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:40:59.337211 containerd[1612]: time="2026-01-28T01:40:59.334686417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:59.345210 kubelet[2938]: E0128 01:40:59.338937 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:59.345210 kubelet[2938]: E0128 01:40:59.339005 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:40:59.353102 kubelet[2938]: E0128 01:40:59.351672 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:40:59.370015 sshd[6736]: Connection closed by 10.0.0.1 port 37896 Jan 28 01:40:59.365764 sshd-session[6728]: pam_unix(sshd:session): session closed for user core Jan 28 01:40:59.372000 audit[6728]: USER_END pid=6728 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:59.406609 kernel: audit: type=1106 audit(1769564459.372:914): pid=6728 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:59.406665 containerd[1612]: time="2026-01-28T01:40:59.375857105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:40:59.424247 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:37896.service: Deactivated successfully. Jan 28 01:40:59.372000 audit[6728]: CRED_DISP pid=6728 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:40:59.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:40:59.454235 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:40:59.458771 systemd-logind[1594]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:40:59.506743 systemd-logind[1594]: Removed session 22. 
Jan 28 01:40:59.530354 containerd[1612]: time="2026-01-28T01:40:59.529215432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:59.533978 containerd[1612]: time="2026-01-28T01:40:59.531710268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:40:59.533978 containerd[1612]: time="2026-01-28T01:40:59.531810745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:59.535855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5-rootfs.mount: Deactivated successfully. Jan 28 01:40:59.538708 kubelet[2938]: E0128 01:40:59.538416 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:59.538708 kubelet[2938]: E0128 01:40:59.538565 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:40:59.546228 kubelet[2938]: E0128 01:40:59.545120 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdbrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:59.546966 containerd[1612]: time="2026-01-28T01:40:59.546664476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:40:59.550134 kubelet[2938]: E0128 01:40:59.548854 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:40:59.555904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c-rootfs.mount: Deactivated successfully. Jan 28 01:40:59.572716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb-rootfs.mount: Deactivated successfully. 
Jan 28 01:40:59.696966 containerd[1612]: time="2026-01-28T01:40:59.694203407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:40:59.722181 containerd[1612]: time="2026-01-28T01:40:59.711798331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:40:59.722181 containerd[1612]: time="2026-01-28T01:40:59.711926068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:40:59.722181 containerd[1612]: time="2026-01-28T01:40:59.717087851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:40:59.722685 kubelet[2938]: E0128 01:40:59.713833 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:59.722685 kubelet[2938]: E0128 01:40:59.713905 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:40:59.722685 kubelet[2938]: E0128 01:40:59.714546 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqn7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:40:59.722685 kubelet[2938]: E0128 01:40:59.719543 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:40:59.935213 kubelet[2938]: I0128 01:40:59.935115 2938 scope.go:117] "RemoveContainer" containerID="bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83" Jan 28 01:40:59.941879 kubelet[2938]: I0128 01:40:59.938879 2938 scope.go:117] "RemoveContainer" containerID="0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb" Jan 28 01:40:59.941879 
kubelet[2938]: E0128 01:40:59.938973 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:40:59.941879 kubelet[2938]: E0128 01:40:59.939117 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(73f4d0ebfe2f50199eb060021cc3bcbf)\"" pod="kube-system/kube-controller-manager-localhost" podUID="73f4d0ebfe2f50199eb060021cc3bcbf" Jan 28 01:40:59.965857 containerd[1612]: time="2026-01-28T01:40:59.964595557Z" level=info msg="RemoveContainer for \"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\"" Jan 28 01:40:59.967603 kubelet[2938]: I0128 01:40:59.966178 2938 scope.go:117] "RemoveContainer" containerID="e1608167b8d5a77019c1c3152bbf1cdb1e6b7e70266211189ee0553a74313f3c" Jan 28 01:41:00.036050 containerd[1612]: time="2026-01-28T01:41:00.033174234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:00.062763 containerd[1612]: time="2026-01-28T01:41:00.056720002Z" level=info msg="CreateContainer within sandbox \"9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 28 01:41:00.083195 containerd[1612]: time="2026-01-28T01:41:00.082025243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:41:00.084081 containerd[1612]: time="2026-01-28T01:41:00.084051525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=0" Jan 28 01:41:00.085829 kubelet[2938]: E0128 01:41:00.085516 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:41:00.085829 kubelet[2938]: E0128 01:41:00.085786 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:41:00.086148 kubelet[2938]: E0128 01:41:00.086086 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMo
unt{Name:kube-api-access-twgjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:00.090622 kubelet[2938]: I0128 01:41:00.088373 2938 scope.go:117] "RemoveContainer" containerID="714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5" Jan 28 01:41:00.097245 kubelet[2938]: E0128 01:41:00.091241 2938 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:00.099052 containerd[1612]: time="2026-01-28T01:41:00.096747002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:41:00.105130 kubelet[2938]: E0128 01:41:00.095818 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:00.105130 kubelet[2938]: E0128 01:41:00.102058 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c" Jan 28 01:41:00.263074 containerd[1612]: time="2026-01-28T01:41:00.262939767Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:00.301820 containerd[1612]: time="2026-01-28T01:41:00.290081578Z" level=info msg="Container f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:41:00.301820 containerd[1612]: time="2026-01-28T01:41:00.296969845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:41:00.301820 containerd[1612]: 
time="2026-01-28T01:41:00.297095480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:00.304541 kubelet[2938]: E0128 01:41:00.304423 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:41:00.307377 kubelet[2938]: E0128 01:41:00.304775 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:41:00.307377 kubelet[2938]: E0128 01:41:00.305054 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49231b61cab941e2b913be4ad476f1ab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10
001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:00.318012 containerd[1612]: time="2026-01-28T01:41:00.317730249Z" level=info msg="RemoveContainer for \"bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83\" returns successfully" Jan 28 01:41:00.319849 kubelet[2938]: I0128 01:41:00.319814 2938 scope.go:117] "RemoveContainer" containerID="7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6" Jan 28 01:41:00.334909 containerd[1612]: time="2026-01-28T01:41:00.333521683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:41:00.345100 containerd[1612]: time="2026-01-28T01:41:00.342188607Z" level=info msg="RemoveContainer for \"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\"" Jan 28 01:41:00.350511 containerd[1612]: time="2026-01-28T01:41:00.348034726Z" level=info msg="CreateContainer within sandbox \"9620265baadedb1921ed90b2995e49d9956007ca01ce33021107b86d4fc2f5ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c\"" Jan 28 01:41:00.353190 containerd[1612]: time="2026-01-28T01:41:00.352754102Z" level=info msg="StartContainer for 
\"f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c\"" Jan 28 01:41:00.375667 containerd[1612]: time="2026-01-28T01:41:00.375011980Z" level=info msg="RemoveContainer for \"7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6\" returns successfully" Jan 28 01:41:00.378213 containerd[1612]: time="2026-01-28T01:41:00.377938028Z" level=info msg="connecting to shim f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c" address="unix:///run/containerd/s/ce2346db093c1e336250475a67b2ed00b51378284e1795f33c44f422c5263262" protocol=ttrpc version=3 Jan 28 01:41:00.429696 containerd[1612]: time="2026-01-28T01:41:00.429554929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:00.437538 containerd[1612]: time="2026-01-28T01:41:00.436245374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:41:00.438197 containerd[1612]: time="2026-01-28T01:41:00.438043409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:00.441143 kubelet[2938]: E0128 01:41:00.438556 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:41:00.441143 kubelet[2938]: E0128 01:41:00.438625 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:41:00.441143 kubelet[2938]: E0128 01:41:00.438901 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:00.441143 kubelet[2938]: E0128 01:41:00.440876 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:41:00.444635 containerd[1612]: time="2026-01-28T01:41:00.443887235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:41:00.512928 systemd[1]: Started cri-containerd-f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c.scope - libcontainer container f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c. 
Jan 28 01:41:00.565668 containerd[1612]: time="2026-01-28T01:41:00.563912634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:00.571742 containerd[1612]: time="2026-01-28T01:41:00.571593608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:00.571742 containerd[1612]: time="2026-01-28T01:41:00.571705467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:41:00.574599 kubelet[2938]: E0128 01:41:00.572801 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:41:00.574599 kubelet[2938]: E0128 01:41:00.573204 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:41:00.574599 kubelet[2938]: E0128 01:41:00.574238 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:00.576748 kubelet[2938]: E0128 01:41:00.576178 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:41:00.631000 audit: BPF prog-id=268 op=LOAD Jan 28 01:41:00.634000 audit: BPF prog-id=269 op=LOAD Jan 28 01:41:00.634000 audit[6813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.636000 audit: BPF prog-id=269 op=UNLOAD Jan 28 01:41:00.636000 audit[6813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.636000 audit: BPF prog-id=270 op=LOAD Jan 28 01:41:00.636000 audit[6813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.636000 audit: BPF prog-id=271 op=LOAD Jan 28 01:41:00.636000 audit[6813]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.639000 audit: BPF prog-id=271 op=UNLOAD Jan 28 01:41:00.639000 audit[6813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.639000 audit: BPF prog-id=270 op=UNLOAD Jan 28 01:41:00.639000 audit[6813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.639000 audit: BPF prog-id=272 op=LOAD Jan 28 01:41:00.639000 audit[6813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3086 pid=6813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:00.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635323236646230346438343139366335303935346436633532393461 Jan 28 01:41:00.768999 containerd[1612]: time="2026-01-28T01:41:00.768715019Z" level=info msg="StartContainer for \"f5226db04d84196c50954d6c5294a3707eda3a87096dd2d010daf8148a32702c\" returns successfully" Jan 28 01:41:01.977033 
kubelet[2938]: I0128 01:41:01.975884 2938 scope.go:117] "RemoveContainer" containerID="0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb" Jan 28 01:41:01.977033 kubelet[2938]: E0128 01:41:01.976005 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:01.977033 kubelet[2938]: E0128 01:41:01.976389 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(73f4d0ebfe2f50199eb060021cc3bcbf)\"" pod="kube-system/kube-controller-manager-localhost" podUID="73f4d0ebfe2f50199eb060021cc3bcbf" Jan 28 01:41:04.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.88:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:04.483921 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). Jan 28 01:41:04.509121 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 28 01:41:04.512221 kernel: audit: type=1130 audit(1769564464.483:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.88:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:04.872000 audit[6847]: USER_ACCT pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:04.879069 sshd[6847]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:04.879902 sshd-session[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:04.921739 systemd-logind[1594]: New session 23 of user core. Jan 28 01:41:04.932153 kernel: audit: type=1101 audit(1769564464.872:926): pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:04.932226 kernel: audit: type=1103 audit(1769564464.878:927): pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:04.878000 audit[6847]: CRED_ACQ pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.025388 kernel: audit: type=1006 audit(1769564464.878:928): pid=6847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 28 01:41:04.878000 audit[6847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe033801b0 a2=3 a3=0 items=0 ppid=1 pid=6847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:05.026619 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:41:04.878000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:05.134904 kernel: audit: type=1300 audit(1769564464.878:928): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe033801b0 a2=3 a3=0 items=0 ppid=1 pid=6847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:05.134992 kernel: audit: type=1327 audit(1769564464.878:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:05.135042 kernel: audit: type=1105 audit(1769564465.061:929): pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.061000 audit[6847]: USER_START pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.069000 audit[6851]: CRED_ACQ pid=6851 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.270945 kernel: audit: type=1103 audit(1769564465.069:930): pid=6851 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.668129 sshd[6851]: Connection closed by 10.0.0.1 port 47730 Jan 28 01:41:05.672757 sshd-session[6847]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:05.683000 audit[6847]: USER_END pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.702882 systemd-logind[1594]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:41:05.703391 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:47730.service: Deactivated successfully. Jan 28 01:41:05.724240 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:41:05.746230 systemd-logind[1594]: Removed session 23. 
Jan 28 01:41:05.688000 audit[6847]: CRED_DISP pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.806766 kernel: audit: type=1106 audit(1769564465.683:931): pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.806869 kernel: audit: type=1104 audit(1769564465.688:932): pid=6847 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:05.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.88:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:07.755003 kubelet[2938]: I0128 01:41:07.739617 2938 scope.go:117] "RemoveContainer" containerID="714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5" Jan 28 01:41:07.755003 kubelet[2938]: E0128 01:41:07.739738 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:07.755003 kubelet[2938]: E0128 01:41:07.739877 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c" Jan 28 01:41:10.378983 kubelet[2938]: E0128 01:41:10.378061 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:41:10.378983 kubelet[2938]: E0128 01:41:10.378920 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" 
podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:10.384403 kubelet[2938]: E0128 01:41:10.384038 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:41:10.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.88:22-10.0.0.1:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:10.710232 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:47740.service - OpenSSH per-connection server daemon (10.0.0.1:47740). Jan 28 01:41:10.721796 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:10.722004 kernel: audit: type=1130 audit(1769564470.710:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.88:22-10.0.0.1:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:10.940000 audit[6871]: USER_ACCT pid=6871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:10.946810 sshd[6871]: Accepted publickey for core from 10.0.0.1 port 47740 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:10.950074 sshd-session[6871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:10.974843 systemd-logind[1594]: New session 24 of user core. Jan 28 01:41:10.946000 audit[6871]: CRED_ACQ pid=6871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.060452 kernel: audit: type=1101 audit(1769564470.940:935): pid=6871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.060677 kernel: audit: type=1103 audit(1769564470.946:936): pid=6871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.060978 kernel: audit: type=1006 audit(1769564470.946:937): pid=6871 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 01:41:11.086941 kernel: audit: type=1300 audit(1769564470.946:937): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69fc27f0 a2=3 a3=0 items=0 ppid=1 pid=6871 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:10.946000 audit[6871]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69fc27f0 a2=3 a3=0 items=0 ppid=1 pid=6871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:10.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:11.122732 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:41:11.142910 kernel: audit: type=1327 audit(1769564470.946:937): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:11.143000 audit[6871]: USER_START pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.187445 kernel: audit: type=1105 audit(1769564471.143:938): pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.157000 audit[6875]: CRED_ACQ pid=6875 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.235683 kernel: audit: type=1103 audit(1769564471.157:939): pid=6875 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.392813 kubelet[2938]: E0128 01:41:11.391071 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:41:11.716805 sshd[6875]: Connection closed by 10.0.0.1 port 47740 Jan 28 01:41:11.715705 sshd-session[6871]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:11.716000 audit[6871]: USER_END pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.727940 systemd-logind[1594]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:41:11.742081 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:47740.service: Deactivated successfully. Jan 28 01:41:11.752073 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 28 01:41:11.753425 kernel: audit: type=1106 audit(1769564471.716:940): pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.719000 audit[6871]: CRED_DISP pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.88:22-10.0.0.1:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:11.783400 kernel: audit: type=1104 audit(1769564471.719:941): pid=6871 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:11.794051 systemd-logind[1594]: Removed session 24. 
Jan 28 01:41:12.380396 kubelet[2938]: E0128 01:41:12.379707 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:41:15.381993 kubelet[2938]: I0128 01:41:15.381857 2938 scope.go:117] "RemoveContainer" containerID="0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb" Jan 28 01:41:15.381993 kubelet[2938]: E0128 01:41:15.381959 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:15.383167 kubelet[2938]: E0128 01:41:15.382985 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:15.422888 kubelet[2938]: E0128 01:41:15.422828 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:41:15.429455 containerd[1612]: time="2026-01-28T01:41:15.427632883Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jan 28 01:41:15.651007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427658801.mount: Deactivated successfully. Jan 28 01:41:15.744611 containerd[1612]: time="2026-01-28T01:41:15.739139080Z" level=info msg="Container 32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:41:15.778100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880965471.mount: Deactivated successfully. Jan 28 01:41:15.841633 containerd[1612]: time="2026-01-28T01:41:15.839422377Z" level=info msg="CreateContainer within sandbox \"646a081358b454bfc4ed890ccfcbe5ac5ad7f82c77034df4b4ca181abd77a0c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691\"" Jan 28 01:41:15.846800 containerd[1612]: time="2026-01-28T01:41:15.846760648Z" level=info msg="StartContainer for \"32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691\"" Jan 28 01:41:15.850224 containerd[1612]: time="2026-01-28T01:41:15.850074442Z" level=info msg="connecting to shim 32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691" address="unix:///run/containerd/s/9f1abb83d31b9d22e8e0e3b7999dce2fc512d5c9d1c633bf2f7c52ea2b0b2d95" protocol=ttrpc version=3 Jan 28 01:41:16.039936 systemd[1]: Started cri-containerd-32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691.scope - libcontainer container 32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691. 
Jan 28 01:41:16.111000 audit: BPF prog-id=273 op=LOAD Jan 28 01:41:16.130176 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:16.130415 kernel: audit: type=1334 audit(1769564476.111:943): prog-id=273 op=LOAD Jan 28 01:41:16.145434 kernel: audit: type=1334 audit(1769564476.116:944): prog-id=274 op=LOAD Jan 28 01:41:16.116000 audit: BPF prog-id=274 op=LOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.161160 kernel: audit: type=1300 audit(1769564476.116:944): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.238725 kernel: audit: type=1327 audit(1769564476.116:944): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.239022 kernel: audit: type=1334 audit(1769564476.116:945): prog-id=274 op=UNLOAD Jan 28 01:41:16.116000 audit: BPF prog-id=274 op=UNLOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=6890 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.285472 kernel: audit: type=1300 audit(1769564476.116:945): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.323858 kernel: audit: type=1327 audit(1769564476.116:945): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.116000 audit: BPF prog-id=275 op=LOAD Jan 28 01:41:16.348196 kernel: audit: type=1334 audit(1769564476.116:946): prog-id=275 op=LOAD Jan 28 01:41:16.348440 kernel: audit: type=1300 audit(1769564476.116:946): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:41:16.412411 kernel: audit: type=1327 audit(1769564476.116:946): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.433058 containerd[1612]: time="2026-01-28T01:41:16.430756109Z" level=info msg="StartContainer for \"32a5c6a546c52f5e89e14ff2d4a452b65bfa3d2011222ab6fe67a25df5245691\" returns successfully" Jan 28 01:41:16.116000 audit: BPF prog-id=276 op=LOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.116000 audit: BPF prog-id=276 op=UNLOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.116000 audit: BPF prog-id=275 op=UNLOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.116000 audit: BPF prog-id=277 op=LOAD Jan 28 01:41:16.116000 audit[6890]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2638 pid=6890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:16.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613563366135343663353266356538396531346666326434613435 Jan 28 01:41:16.709939 kubelet[2938]: E0128 01:41:16.709069 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:16.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@24-10.0.0.88:22-10.0.0.1:51638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:16.797837 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:51638.service - OpenSSH per-connection server daemon (10.0.0.1:51638). Jan 28 01:41:17.228000 audit[6922]: USER_ACCT pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:17.234755 sshd[6922]: Accepted publickey for core from 10.0.0.1 port 51638 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:17.235000 audit[6922]: CRED_ACQ pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:17.235000 audit[6922]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff40b270d0 a2=3 a3=0 items=0 ppid=1 pid=6922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:17.235000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:17.254239 sshd-session[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:17.331031 systemd-logind[1594]: New session 25 of user core. Jan 28 01:41:17.369040 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 28 01:41:17.429000 audit[6922]: USER_START pid=6922 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:17.475000 audit[6933]: CRED_ACQ pid=6933 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:18.272916 sshd[6933]: Connection closed by 10.0.0.1 port 51638 Jan 28 01:41:18.277000 audit[6922]: USER_END pid=6922 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:18.277000 audit[6922]: CRED_DISP pid=6922 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:18.278077 sshd-session[6922]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:18.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.88:22-10.0.0.1:51638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:18.303109 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:51638.service: Deactivated successfully. Jan 28 01:41:18.326448 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:41:18.333228 systemd-logind[1594]: Session 25 logged out. Waiting for processes to exit. 
Jan 28 01:41:18.341049 systemd-logind[1594]: Removed session 25. Jan 28 01:41:19.373827 kubelet[2938]: I0128 01:41:19.371702 2938 scope.go:117] "RemoveContainer" containerID="714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5" Jan 28 01:41:19.373827 kubelet[2938]: E0128 01:41:19.371876 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:19.427897 containerd[1612]: time="2026-01-28T01:41:19.421080514Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jan 28 01:41:19.557826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253447470.mount: Deactivated successfully. Jan 28 01:41:19.595869 containerd[1612]: time="2026-01-28T01:41:19.582205346Z" level=info msg="Container 954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:41:19.728844 containerd[1612]: time="2026-01-28T01:41:19.726643069Z" level=info msg="CreateContainer within sandbox \"a56bd6992403f89973ada7ad3355f99bf77cb3f3034dd5fa7cff9dcc493606df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb\"" Jan 28 01:41:19.729020 containerd[1612]: time="2026-01-28T01:41:19.728977946Z" level=info msg="StartContainer for \"954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb\"" Jan 28 01:41:19.778478 containerd[1612]: time="2026-01-28T01:41:19.778242807Z" level=info msg="connecting to shim 954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb" address="unix:///run/containerd/s/0cf8acde16ffb195c6f80d9b444f4d007e850459187eab2f54d297c8d217061b" protocol=ttrpc version=3 Jan 28 01:41:19.902829 systemd[1]: Started 
cri-containerd-954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb.scope - libcontainer container 954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb. Jan 28 01:41:19.969000 audit: BPF prog-id=278 op=LOAD Jan 28 01:41:19.977000 audit: BPF prog-id=279 op=LOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=279 op=UNLOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=280 op=LOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=281 op=LOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=281 op=UNLOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=280 op=UNLOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:19.977000 audit: BPF prog-id=282 op=LOAD Jan 28 01:41:19.977000 audit[6947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2611 pid=6947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:19.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935346338333766333437386538363339616138366264303335366263 Jan 28 01:41:20.225437 containerd[1612]: time="2026-01-28T01:41:20.220678258Z" level=info msg="StartContainer for \"954c837f3478e8639aa86bd0356bc7712d3ccf0337cc3d07bc3f58ab10c4f5bb\" returns successfully" Jan 28 01:41:20.822758 kubelet[2938]: E0128 01:41:20.818951 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:21.394452 kubelet[2938]: E0128 01:41:21.394191 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:21.823156 kubelet[2938]: E0128 01:41:21.822902 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:25.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.88:22-10.0.0.1:59690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:25.292894 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 28 01:41:25.293014 kernel: audit: type=1130 audit(1769564485.233:968): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.88:22-10.0.0.1:59690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:25.233476 systemd[1]: Started sshd@25-10.0.0.88:22-10.0.0.1:59690.service - OpenSSH per-connection server daemon (10.0.0.1:59690). 
Jan 28 01:41:25.432745 kubelet[2938]: E0128 01:41:25.431045 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:25.441424 kubelet[2938]: E0128 01:41:25.439665 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:25.478491 kubelet[2938]: E0128 01:41:25.478440 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:41:25.486412 kubelet[2938]: E0128 01:41:25.481880 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:41:25.496517 kubelet[2938]: E0128 01:41:25.495872 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:41:25.520975 kubelet[2938]: E0128 01:41:25.520867 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:25.521723 kubelet[2938]: E0128 01:41:25.521195 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:26.564495 kubelet[2938]: E0128 01:41:26.564433 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:41:26.702411 kernel: audit: type=1101 audit(1769564486.620:969): pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 28 01:41:26.620000 audit[6980]: USER_ACCT pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:26.678484 sshd-session[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:26.715842 sshd[6980]: Accepted publickey for core from 10.0.0.1 port 59690 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:26.659000 audit[6980]: CRED_ACQ pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:26.814190 kernel: audit: type=1103 audit(1769564486.659:970): pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:26.814523 kernel: audit: type=1006 audit(1769564486.659:971): pid=6980 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 01:41:26.820479 kernel: audit: type=1300 audit(1769564486.659:971): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd99605970 a2=3 a3=0 items=0 ppid=1 pid=6980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:26.659000 audit[6980]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd99605970 a2=3 a3=0 items=0 ppid=1 pid=6980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:26.872045 systemd-logind[1594]: New session 26 of user core. Jan 28 01:41:26.911701 kernel: audit: type=1327 audit(1769564486.659:971): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:26.659000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:26.955199 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:41:26.981000 audit[6980]: USER_START pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.095520 kernel: audit: type=1105 audit(1769564486.981:972): pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.095824 kernel: audit: type=1103 audit(1769564486.995:973): pid=6986 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:26.995000 audit[6986]: CRED_ACQ pid=6986 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.632218 kubelet[2938]: E0128 01:41:27.629984 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:41:27.815382 kubelet[2938]: E0128 01:41:27.814900 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:27.864018 sshd[6986]: Connection closed by 10.0.0.1 port 59690 Jan 28 01:41:27.864022 sshd-session[6980]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:27.921000 audit[6980]: USER_END pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.979352 kernel: audit: type=1106 audit(1769564487.921:974): pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.979629 kernel: audit: type=1104 
audit(1769564487.954:975): pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.954000 audit[6980]: CRED_DISP pid=6980 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:27.984916 systemd[1]: sshd@25-10.0.0.88:22-10.0.0.1:59690.service: Deactivated successfully. Jan 28 01:41:28.000250 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:41:28.013914 systemd-logind[1594]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:41:27.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.88:22-10.0.0.1:59690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:28.029458 systemd-logind[1594]: Removed session 26. Jan 28 01:41:32.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.88:22-10.0.0.1:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:32.910546 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:32.910725 kernel: audit: type=1130 audit(1769564492.905:977): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.88:22-10.0.0.1:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:32.905415 systemd[1]: Started sshd@26-10.0.0.88:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588). 
Jan 28 01:41:33.331000 audit[7035]: USER_ACCT pid=7035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.337622 sshd[7035]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:33.349747 sshd-session[7035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:33.382683 kernel: audit: type=1101 audit(1769564493.331:978): pid=7035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.341000 audit[7035]: CRED_ACQ pid=7035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.408066 systemd-logind[1594]: New session 27 of user core. 
Jan 28 01:41:33.452493 kernel: audit: type=1103 audit(1769564493.341:979): pid=7035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.452723 kernel: audit: type=1006 audit(1769564493.341:980): pid=7035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 28 01:41:33.452781 kernel: audit: type=1300 audit(1769564493.341:980): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd641cb320 a2=3 a3=0 items=0 ppid=1 pid=7035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:33.341000 audit[7035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd641cb320 a2=3 a3=0 items=0 ppid=1 pid=7035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:33.490848 kernel: audit: type=1327 audit(1769564493.341:980): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:33.341000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:33.510112 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 01:41:33.554000 audit[7035]: USER_START pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.601867 kernel: audit: type=1105 audit(1769564493.554:981): pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.602150 kernel: audit: type=1103 audit(1769564493.566:982): pid=7039 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.566000 audit[7039]: CRED_ACQ pid=7039 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.952000 audit[7035]: USER_END pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.955861 sshd[7039]: Connection closed by 10.0.0.1 port 45588 Jan 28 01:41:33.948145 sshd-session[7035]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:33.963071 systemd[1]: sshd@26-10.0.0.88:22-10.0.0.1:45588.service: Deactivated successfully. 
Jan 28 01:41:33.994627 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:41:34.010187 kernel: audit: type=1106 audit(1769564493.952:983): pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:34.009481 systemd-logind[1594]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:41:34.012510 systemd-logind[1594]: Removed session 27. Jan 28 01:41:33.952000 audit[7035]: CRED_DISP pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:34.053164 kernel: audit: type=1104 audit(1769564493.952:984): pid=7035 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:33.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.88:22-10.0.0.1:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:34.388054 kubelet[2938]: E0128 01:41:34.387843 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:35.514949 kubelet[2938]: E0128 01:41:35.514507 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:37.375990 kubelet[2938]: E0128 01:41:37.375764 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:41:37.816157 kubelet[2938]: E0128 01:41:37.814832 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:37.930413 kubelet[2938]: E0128 01:41:37.923697 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:38.993000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:45602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:38.993713 systemd[1]: Started sshd@27-10.0.0.88:22-10.0.0.1:45602.service - OpenSSH per-connection server daemon (10.0.0.1:45602). Jan 28 01:41:39.026903 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:39.027016 kernel: audit: type=1130 audit(1769564498.993:986): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:45602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:39.285000 audit[7074]: USER_ACCT pid=7074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.288455 sshd[7074]: Accepted publickey for core from 10.0.0.1 port 45602 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:39.296934 sshd-session[7074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:39.290000 audit[7074]: CRED_ACQ pid=7074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.316027 systemd-logind[1594]: New session 28 of user core. 
Jan 28 01:41:39.337965 kernel: audit: type=1101 audit(1769564499.285:987): pid=7074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.338099 kernel: audit: type=1103 audit(1769564499.290:988): pid=7074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.355719 kernel: audit: type=1006 audit(1769564499.291:989): pid=7074 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 28 01:41:39.359533 kernel: audit: type=1300 audit(1769564499.291:989): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7c507090 a2=3 a3=0 items=0 ppid=1 pid=7074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:39.291000 audit[7074]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7c507090 a2=3 a3=0 items=0 ppid=1 pid=7074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:39.385639 kubelet[2938]: E0128 01:41:39.385535 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:41:39.291000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:39.401441 kernel: audit: type=1327 audit(1769564499.291:989): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:39.403143 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:41:39.420000 audit[7074]: USER_START pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.426000 audit[7079]: CRED_ACQ pid=7079 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.487485 kernel: audit: type=1105 audit(1769564499.420:990): pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.487688 kernel: audit: type=1103 audit(1769564499.426:991): pid=7079 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.856222 sshd[7079]: Connection closed by 10.0.0.1 port 45602 Jan 28 01:41:39.861000 audit[7074]: USER_END pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.862393 sshd-session[7074]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:39.885969 systemd[1]: sshd@27-10.0.0.88:22-10.0.0.1:45602.service: Deactivated successfully. Jan 28 01:41:39.913226 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:41:39.929165 systemd-logind[1594]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:41:39.937685 systemd-logind[1594]: Removed session 28. Jan 28 01:41:39.951806 kernel: audit: type=1106 audit(1769564499.861:992): pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.951953 kernel: audit: type=1104 audit(1769564499.869:993): pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.869000 audit[7074]: CRED_DISP pid=7074 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:39.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:45602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:40.376891 containerd[1612]: time="2026-01-28T01:41:40.375491458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:41:40.513361 containerd[1612]: time="2026-01-28T01:41:40.513114752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:40.522837 containerd[1612]: time="2026-01-28T01:41:40.521200831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:41:40.522837 containerd[1612]: time="2026-01-28T01:41:40.521366774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:40.523023 kubelet[2938]: E0128 01:41:40.521764 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:41:40.523023 kubelet[2938]: E0128 01:41:40.521829 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:41:40.523023 kubelet[2938]: E0128 01:41:40.522046 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:41:40.530451 containerd[1612]: time="2026-01-28T01:41:40.529844522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:41:40.641570 containerd[1612]: time="2026-01-28T01:41:40.640969029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:40.667899 containerd[1612]: time="2026-01-28T01:41:40.664847641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:40.667899 containerd[1612]: time="2026-01-28T01:41:40.665042715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:41:40.668180 kubelet[2938]: E0128 01:41:40.665487 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:41:40.668180 kubelet[2938]: E0128 01:41:40.665552 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:41:40.668180 kubelet[2938]: E0128 01:41:40.665771 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4245v_calico-system(a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:40.669422 kubelet[2938]: E0128 01:41:40.669114 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:41:41.387013 containerd[1612]: time="2026-01-28T01:41:41.386569893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:41:41.504360 containerd[1612]: time="2026-01-28T01:41:41.503570730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:41.505745 containerd[1612]: time="2026-01-28T01:41:41.505526220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:41:41.505745 containerd[1612]: time="2026-01-28T01:41:41.505697389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:41.508174 kubelet[2938]: E0128 01:41:41.505985 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:41:41.508174 kubelet[2938]: E0128 01:41:41.506050 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:41:41.508983 kubelet[2938]: E0128 01:41:41.506439 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdbrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-57jrt_calico-apiserver(df1949f7-cac3-4cf6-8c60-f8d963a49163): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:41.509933 containerd[1612]: time="2026-01-28T01:41:41.509524971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:41:41.510145 kubelet[2938]: E0128 01:41:41.510116 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:41:41.703794 containerd[1612]: time="2026-01-28T01:41:41.703250347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:41:41.707368 containerd[1612]: time="2026-01-28T01:41:41.707107953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:41:41.707700 containerd[1612]: time="2026-01-28T01:41:41.707412722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:41.708346 kubelet[2938]: E0128 01:41:41.708045 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:41:41.708346 kubelet[2938]: E0128 01:41:41.708162 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:41:41.708906 kubelet[2938]: E0128 01:41:41.708417 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49231b61cab941e2b913be4ad476f1ab,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:41.713811 containerd[1612]: time="2026-01-28T01:41:41.713579299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:41:41.920216 containerd[1612]: 
time="2026-01-28T01:41:41.919370212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:41.935784 containerd[1612]: time="2026-01-28T01:41:41.933475658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:41:41.935784 containerd[1612]: time="2026-01-28T01:41:41.934231889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:41.936237 kubelet[2938]: E0128 01:41:41.936123 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:41:41.936237 kubelet[2938]: E0128 01:41:41.936197 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:41:41.940742 kubelet[2938]: E0128 01:41:41.936488 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppmfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757b4d7df4-d9b2x_calico-system(a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:41.940742 kubelet[2938]: E0128 01:41:41.938396 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:41:43.081040 containerd[1612]: time="2026-01-28T01:41:43.078548472Z" level=info msg="container event discarded" container=7818e0f0981fabe3c589b306b9f44d735eba4dee07167d9aadb3ef62f716a2c6 type=CONTAINER_STOPPED_EVENT Jan 28 01:41:43.091237 containerd[1612]: time="2026-01-28T01:41:43.084233861Z" level=info msg="container event discarded" container=bca85ac4f8f009994a35d6e89f6ce4ac782c6cec4fd3bb8329adfd0bbf299d83 type=CONTAINER_STOPPED_EVENT Jan 28 01:41:44.380757 kubelet[2938]: E0128 01:41:44.374789 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:41:44.901068 containerd[1612]: time="2026-01-28T01:41:44.900994674Z" level=info msg="container event discarded" container=0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb type=CONTAINER_CREATED_EVENT Jan 28 01:41:44.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:38402 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:44.927177 systemd[1]: Started sshd@28-10.0.0.88:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402). Jan 28 01:41:44.942000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:44.942115 kernel: audit: type=1130 audit(1769564504.926:995): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:38402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:44.986001 containerd[1612]: time="2026-01-28T01:41:44.983841415Z" level=info msg="container event discarded" container=714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5 type=CONTAINER_CREATED_EVENT Jan 28 01:41:45.212000 audit[7096]: USER_ACCT pid=7096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.219786 sshd[7096]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:45.224523 sshd-session[7096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:45.271836 systemd-logind[1594]: New session 29 of user core. 
Jan 28 01:41:45.294566 kernel: audit: type=1101 audit(1769564505.212:996): pid=7096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.294782 kernel: audit: type=1103 audit(1769564505.215:997): pid=7096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.215000 audit[7096]: CRED_ACQ pid=7096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.306042 kernel: audit: type=1006 audit(1769564505.215:998): pid=7096 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 28 01:41:45.306175 kernel: audit: type=1300 audit(1769564505.215:998): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd0e102b0 a2=3 a3=0 items=0 ppid=1 pid=7096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:45.215000 audit[7096]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd0e102b0 a2=3 a3=0 items=0 ppid=1 pid=7096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:45.327581 kernel: audit: type=1327 audit(1769564505.215:998): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:45.215000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:45.336791 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:41:45.361000 audit[7096]: USER_START pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.438409 kernel: audit: type=1105 audit(1769564505.361:999): pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.438604 kernel: audit: type=1103 audit(1769564505.373:1000): pid=7100 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.373000 audit[7100]: CRED_ACQ pid=7100 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:45.447964 containerd[1612]: time="2026-01-28T01:41:45.446202696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:41:45.589134 containerd[1612]: time="2026-01-28T01:41:45.586890509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:45.600420 containerd[1612]: time="2026-01-28T01:41:45.598911031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:41:45.600420 containerd[1612]: time="2026-01-28T01:41:45.599034851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:45.610464 kubelet[2938]: E0128 01:41:45.603735 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:41:45.610464 kubelet[2938]: E0128 01:41:45.603886 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:41:45.610464 kubelet[2938]: E0128 01:41:45.604071 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twgjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78b6655f44-dr84p_calico-system(477c43dc-f740-4bfd-b59c-255fe52c8673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:45.610464 kubelet[2938]: E0128 01:41:45.610028 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:45.987792 sshd[7100]: Connection closed by 10.0.0.1 port 38402 Jan 28 01:41:45.989694 sshd-session[7096]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:46.005000 audit[7096]: USER_END pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:46.094466 kernel: audit: type=1106 audit(1769564506.005:1001): pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:46.094676 kernel: audit: type=1104 audit(1769564506.005:1002): pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:46.005000 audit[7096]: CRED_DISP pid=7096 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:46.099132 containerd[1612]: time="2026-01-28T01:41:46.098558121Z" level=info msg="container event discarded" container=0a8117195d9fae8d7be32af18157ae5c0e157949452e454c995b8da1f24b42cb type=CONTAINER_STARTED_EVENT Jan 28 01:41:46.099948 containerd[1612]: time="2026-01-28T01:41:46.099898914Z" level=info msg="container event discarded" container=714839389731feaa100ba85d309cb53c753aa71e6cfb737ef244adf739af06d5 type=CONTAINER_STARTED_EVENT Jan 28 01:41:46.109059 systemd[1]: sshd@28-10.0.0.88:22-10.0.0.1:38402.service: Deactivated successfully. Jan 28 01:41:46.125833 systemd[1]: session-29.scope: Deactivated successfully. 
Jan 28 01:41:46.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:38402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:46.154476 systemd-logind[1594]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:41:46.157567 systemd-logind[1594]: Removed session 29. Jan 28 01:41:50.377235 containerd[1612]: time="2026-01-28T01:41:50.376978075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:41:50.514241 containerd[1612]: time="2026-01-28T01:41:50.513818651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:50.522744 containerd[1612]: time="2026-01-28T01:41:50.519706529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:41:50.522744 containerd[1612]: time="2026-01-28T01:41:50.519862770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:50.522950 kubelet[2938]: E0128 01:41:50.520089 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:41:50.522950 kubelet[2938]: E0128 01:41:50.520149 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 
01:41:50.522950 kubelet[2938]: E0128 01:41:50.521204 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckwxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcc88c58b-n2mcr_calico-apiserver(38764aa9-f6ea-4a8f-ac0e-198fa6f97144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:50.524181 kubelet[2938]: E0128 01:41:50.524126 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:41:51.042245 systemd[1]: Started sshd@29-10.0.0.88:22-10.0.0.1:38416.service - OpenSSH per-connection server daemon (10.0.0.1:38416). 
Jan 28 01:41:51.080766 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:41:51.080911 kernel: audit: type=1130 audit(1769564511.042:1004): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:51.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:51.320000 audit[7121]: USER_ACCT pid=7121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:51.337725 sshd[7121]: Accepted publickey for core from 10.0.0.1 port 38416 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:51.341223 sshd-session[7121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:51.365125 systemd-logind[1594]: New session 30 of user core. 
Jan 28 01:41:51.337000 audit[7121]: CRED_ACQ pid=7121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:51.417354 kernel: audit: type=1101 audit(1769564511.320:1005): pid=7121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:51.417486 kernel: audit: type=1103 audit(1769564511.337:1006): pid=7121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:51.556097 kernel: audit: type=1006 audit(1769564511.337:1007): pid=7121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 28 01:41:51.793038 kernel: audit: type=1300 audit(1769564511.337:1007): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9381d570 a2=3 a3=0 items=0 ppid=1 pid=7121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:52.091860 kernel: audit: type=1327 audit(1769564511.337:1007): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:51.337000 audit[7121]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9381d570 a2=3 a3=0 items=0 ppid=1 pid=7121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:51.337000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:51.676100 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:41:52.360000 audit[7121]: USER_START pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:52.413728 kubelet[2938]: E0128 01:41:52.371727 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:41:52.428501 kernel: audit: type=1105 audit(1769564512.360:1008): pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:52.368000 audit[7125]: CRED_ACQ pid=7125 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:52.503364 kernel: audit: type=1103 audit(1769564512.368:1009): pid=7125 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.088910 sshd[7125]: Connection closed by 10.0.0.1 port 38416 Jan 28 01:41:53.104172 sshd-session[7121]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:53.108000 audit[7121]: USER_END pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.137408 systemd[1]: sshd@29-10.0.0.88:22-10.0.0.1:38416.service: Deactivated successfully. Jan 28 01:41:53.169434 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:41:53.108000 audit[7121]: CRED_DISP pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.179037 systemd-logind[1594]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:41:53.185823 systemd-logind[1594]: Removed session 30. Jan 28 01:41:53.194462 systemd[1]: Started sshd@30-10.0.0.88:22-10.0.0.1:34930.service - OpenSSH per-connection server daemon (10.0.0.1:34930). 
Jan 28 01:41:53.228445 kernel: audit: type=1106 audit(1769564513.108:1010): pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.228583 kernel: audit: type=1104 audit(1769564513.108:1011): pid=7121 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:53.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.88:22-10.0.0.1:34930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:53.401144 kubelet[2938]: E0128 01:41:53.384385 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:41:53.420391 containerd[1612]: time="2026-01-28T01:41:53.411617988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:41:53.555240 containerd[1612]: time="2026-01-28T01:41:53.553615467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:41:53.563835 containerd[1612]: time="2026-01-28T01:41:53.563619956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:41:53.563835 containerd[1612]: time="2026-01-28T01:41:53.563797227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:41:53.564585 kubelet[2938]: E0128 01:41:53.564496 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:41:53.564585 kubelet[2938]: E0128 01:41:53.564562 2938 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:41:53.573848 kubelet[2938]: E0128 01:41:53.566169 2938 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqn7r,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s28bh_calico-system(55f83d8e-e337-4a1b-9dba-8df114668f11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:41:53.573848 kubelet[2938]: E0128 01:41:53.567427 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:41:53.753960 sshd[7138]: Accepted publickey for core from 10.0.0.1 port 34930 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:53.752000 audit[7138]: USER_ACCT pid=7138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.757000 audit[7138]: CRED_ACQ pid=7138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.757000 audit[7138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbeb8b650 a2=3 a3=0 items=0 ppid=1 pid=7138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:53.757000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:53.765229 sshd-session[7138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:53.811996 systemd-logind[1594]: New session 31 of user core. Jan 28 01:41:53.825017 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 28 01:41:53.846000 audit[7138]: USER_START pid=7138 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:53.861000 audit[7143]: CRED_ACQ pid=7143 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.354126 sshd[7143]: Connection closed by 10.0.0.1 port 34930 Jan 28 01:41:55.359830 sshd-session[7138]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:55.386196 kubelet[2938]: E0128 01:41:55.386078 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:41:55.387000 audit[7138]: USER_END pid=7138 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.390000 audit[7138]: CRED_DISP pid=7138 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.403921 systemd[1]: Started sshd@31-10.0.0.88:22-10.0.0.1:34940.service - OpenSSH per-connection server daemon (10.0.0.1:34940). Jan 28 01:41:55.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.88:22-10.0.0.1:34940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:55.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.88:22-10.0.0.1:34930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:55.404994 systemd[1]: sshd@30-10.0.0.88:22-10.0.0.1:34930.service: Deactivated successfully. Jan 28 01:41:55.416720 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:41:55.442198 systemd-logind[1594]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:41:55.492639 systemd-logind[1594]: Removed session 31. 
Jan 28 01:41:55.784000 audit[7153]: USER_ACCT pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.786207 sshd[7153]: Accepted publickey for core from 10.0.0.1 port 34940 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:55.790453 sshd-session[7153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:55.785000 audit[7153]: CRED_ACQ pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.785000 audit[7153]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd291ec0a0 a2=3 a3=0 items=0 ppid=1 pid=7153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:55.785000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:55.849163 systemd-logind[1594]: New session 32 of user core. Jan 28 01:41:55.871642 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 28 01:41:55.889000 audit[7153]: USER_START pid=7153 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:55.896000 audit[7162]: CRED_ACQ pid=7162 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:56.378571 kubelet[2938]: E0128 01:41:56.374772 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:41:58.934017 sshd[7162]: Connection closed by 10.0.0.1 port 34940 Jan 28 01:41:58.931481 sshd-session[7153]: pam_unix(sshd:session): session closed for user core Jan 28 01:41:59.004529 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 01:41:59.006903 kernel: audit: type=1106 audit(1769564518.975:1028): pid=7153 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:58.975000 audit[7153]: USER_END pid=7153 uid=0 auid=500 ses=32 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.008586 systemd[1]: sshd@31-10.0.0.88:22-10.0.0.1:34940.service: Deactivated successfully. Jan 28 01:41:59.025932 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:41:59.028020 systemd[1]: session-32.scope: Consumed 1.047s CPU time, 45.2M memory peak. Jan 28 01:41:59.034928 systemd-logind[1594]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:41:58.975000 audit[7153]: CRED_DISP pid=7153 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.097014 systemd[1]: Started sshd@32-10.0.0.88:22-10.0.0.1:34946.service - OpenSSH per-connection server daemon (10.0.0.1:34946). Jan 28 01:41:59.106822 systemd-logind[1594]: Removed session 32. Jan 28 01:41:59.134374 kernel: audit: type=1104 audit(1769564518.975:1029): pid=7153 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.88:22-10.0.0.1:34940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:59.216500 kernel: audit: type=1131 audit(1769564519.004:1030): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.88:22-10.0.0.1:34940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:59.059000 audit[7197]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=7197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.265730 kernel: audit: type=1325 audit(1769564519.059:1031): table=filter:144 family=2 entries=26 op=nft_register_rule pid=7197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.059000 audit[7197]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe08205920 a2=0 a3=7ffe0820590c items=0 ppid=3042 pid=7197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.407246 kernel: audit: type=1300 audit(1769564519.059:1031): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe08205920 a2=0 a3=7ffe0820590c items=0 ppid=3042 pid=7197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.407506 kernel: audit: type=1327 audit(1769564519.059:1031): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.092000 audit[7197]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=7197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.478396 kernel: audit: type=1325 audit(1769564519.092:1032): table=nat:145 family=2 entries=20 op=nft_register_rule pid=7197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.092000 audit[7197]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe08205920 a2=0 a3=0 items=0 
ppid=3042 pid=7197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.553005 kernel: audit: type=1300 audit(1769564519.092:1032): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe08205920 a2=0 a3=0 items=0 ppid=3042 pid=7197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.553146 kernel: audit: type=1327 audit(1769564519.092:1032): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.88:22-10.0.0.1:34946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:41:59.603349 kernel: audit: type=1130 audit(1769564519.092:1033): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.88:22-10.0.0.1:34946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:41:59.303000 audit[7209]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=7209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.303000 audit[7209]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe9f40c730 a2=0 a3=7ffe9f40c71c items=0 ppid=3042 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.423000 audit[7209]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=7209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:41:59.423000 audit[7209]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe9f40c730 a2=0 a3=0 items=0 ppid=3042 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:41:59.626000 audit[7201]: USER_ACCT pid=7201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.630465 sshd[7201]: Accepted publickey for core from 10.0.0.1 port 34946 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:41:59.632000 audit[7201]: CRED_ACQ pid=7201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.632000 audit[7201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6d71ffc0 a2=3 a3=0 items=0 ppid=1 pid=7201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:41:59.632000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:41:59.636524 sshd-session[7201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:41:59.683645 systemd-logind[1594]: New session 33 of user core. Jan 28 01:41:59.699055 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:41:59.785000 audit[7201]: USER_START pid=7201 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:41:59.816000 audit[7214]: CRED_ACQ pid=7214 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:00.370936 kubelet[2938]: E0128 01:42:00.369140 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:42:01.648000 audit[7201]: USER_END pid=7201 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:01.648000 audit[7201]: CRED_DISP pid=7201 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:01.658225 sshd[7214]: Connection closed by 10.0.0.1 port 34946 Jan 28 01:42:01.633578 sshd-session[7201]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:01.669899 systemd[1]: sshd@32-10.0.0.88:22-10.0.0.1:34946.service: Deactivated successfully. Jan 28 01:42:01.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.88:22-10.0.0.1:34946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:01.682829 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:42:01.691086 systemd-logind[1594]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:42:01.696388 systemd-logind[1594]: Removed session 33. Jan 28 01:42:01.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.88:22-10.0.0.1:34958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:01.706461 systemd[1]: Started sshd@33-10.0.0.88:22-10.0.0.1:34958.service - OpenSSH per-connection server daemon (10.0.0.1:34958). 
Jan 28 01:42:01.894000 audit[7226]: USER_ACCT pid=7226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:01.896876 sshd[7226]: Accepted publickey for core from 10.0.0.1 port 34958 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:01.903000 audit[7226]: CRED_ACQ pid=7226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:01.903000 audit[7226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2e5d7790 a2=3 a3=0 items=0 ppid=1 pid=7226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:01.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:01.905618 sshd-session[7226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:01.951222 systemd-logind[1594]: New session 34 of user core. Jan 28 01:42:01.969649 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 28 01:42:02.022000 audit[7226]: USER_START pid=7226 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:02.039000 audit[7230]: CRED_ACQ pid=7230 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:02.778089 sshd[7230]: Connection closed by 10.0.0.1 port 34958 Jan 28 01:42:02.771090 sshd-session[7226]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:02.772000 audit[7226]: USER_END pid=7226 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:02.773000 audit[7226]: CRED_DISP pid=7226 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:02.795472 systemd[1]: sshd@33-10.0.0.88:22-10.0.0.1:34958.service: Deactivated successfully. Jan 28 01:42:02.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.88:22-10.0.0.1:34958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:02.809192 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:42:02.824408 systemd-logind[1594]: Session 34 logged out. Waiting for processes to exit. 
Jan 28 01:42:02.861538 systemd-logind[1594]: Removed session 34. Jan 28 01:42:04.377244 kubelet[2938]: E0128 01:42:04.375433 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:42:05.466956 kubelet[2938]: E0128 01:42:05.465816 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:42:06.412445 kubelet[2938]: E0128 01:42:06.411428 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:42:07.388870 kubelet[2938]: E0128 01:42:07.386009 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:42:07.817674 systemd[1]: Started sshd@34-10.0.0.88:22-10.0.0.1:32772.service - OpenSSH per-connection server daemon (10.0.0.1:32772). Jan 28 01:42:07.858577 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 01:42:07.858987 kernel: audit: type=1130 audit(1769564527.825:1053): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.88:22-10.0.0.1:32772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:07.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.88:22-10.0.0.1:32772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:08.178000 audit[7243]: USER_ACCT pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.245865 systemd-logind[1594]: New session 35 of user core. 
Jan 28 01:42:08.196031 sshd-session[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:08.260840 sshd[7243]: Accepted publickey for core from 10.0.0.1 port 32772 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:08.282372 kernel: audit: type=1101 audit(1769564528.178:1054): pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.187000 audit[7243]: CRED_ACQ pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.299949 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:42:08.329418 kernel: audit: type=1103 audit(1769564528.187:1055): pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.187000 audit[7243]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd834e6830 a2=3 a3=0 items=0 ppid=1 pid=7243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:08.449030 kernel: audit: type=1006 audit(1769564528.187:1056): pid=7243 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 28 01:42:08.449123 kernel: audit: type=1300 audit(1769564528.187:1056): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd834e6830 a2=3 a3=0 items=0 ppid=1 pid=7243 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:08.460568 kernel: audit: type=1327 audit(1769564528.187:1056): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:08.187000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:08.460776 kubelet[2938]: E0128 01:42:08.447367 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:42:08.483591 kernel: audit: type=1105 audit(1769564528.358:1057): pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.358000 audit[7243]: USER_START pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.507026 kubelet[2938]: E0128 01:42:08.505578 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:42:08.663244 kernel: audit: type=1103 audit(1769564528.371:1058): pid=7247 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:08.371000 audit[7247]: CRED_ACQ pid=7247 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:09.330359 sshd[7247]: Connection closed by 10.0.0.1 port 32772 Jan 28 01:42:09.337685 sshd-session[7243]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:09.346000 audit[7243]: USER_END pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:09.381558 systemd[1]: sshd@34-10.0.0.88:22-10.0.0.1:32772.service: Deactivated successfully. 
Jan 28 01:42:09.427358 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:42:09.446934 kernel: audit: type=1106 audit(1769564529.346:1059): pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:09.447054 kernel: audit: type=1104 audit(1769564529.346:1060): pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:09.346000 audit[7243]: CRED_DISP pid=7243 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:09.464892 systemd-logind[1594]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:42:09.482855 systemd-logind[1594]: Removed session 35. Jan 28 01:42:09.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.88:22-10.0.0.1:32772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:14.414476 systemd[1]: Started sshd@35-10.0.0.88:22-10.0.0.1:57560.service - OpenSSH per-connection server daemon (10.0.0.1:57560). Jan 28 01:42:14.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.88:22-10.0.0.1:57560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:42:14.433909 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:42:14.434001 kernel: audit: type=1130 audit(1769564534.418:1062): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.88:22-10.0.0.1:57560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:14.707000 audit[7263]: USER_ACCT pid=7263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.714809 sshd[7263]: Accepted publickey for core from 10.0.0.1 port 57560 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:14.722887 sshd-session[7263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:14.713000 audit[7263]: CRED_ACQ pid=7263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.796236 systemd-logind[1594]: New session 36 of user core. 
Jan 28 01:42:14.827439 kernel: audit: type=1101 audit(1769564534.707:1063): pid=7263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.827570 kernel: audit: type=1103 audit(1769564534.713:1064): pid=7263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.827619 kernel: audit: type=1006 audit(1769564534.713:1065): pid=7263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 28 01:42:14.713000 audit[7263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b3cd420 a2=3 a3=0 items=0 ppid=1 pid=7263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:14.877663 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 28 01:42:14.919420 kernel: audit: type=1300 audit(1769564534.713:1065): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b3cd420 a2=3 a3=0 items=0 ppid=1 pid=7263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:14.713000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:14.970927 kernel: audit: type=1327 audit(1769564534.713:1065): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:14.971152 kernel: audit: type=1105 audit(1769564534.904:1066): pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.904000 audit[7263]: USER_START pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.035456 kernel: audit: type=1103 audit(1769564534.914:1067): pid=7267 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:14.914000 audit[7267]: CRED_ACQ pid=7267 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.491622 kubelet[2938]: E0128 01:42:15.487677 2938 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:42:15.586150 sshd[7267]: Connection closed by 10.0.0.1 port 57560 Jan 28 01:42:15.610395 sshd-session[7263]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:15.684000 audit[7263]: USER_END pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.736025 systemd[1]: sshd@35-10.0.0.88:22-10.0.0.1:57560.service: Deactivated successfully. Jan 28 01:42:15.797165 systemd[1]: session-36.scope: Deactivated successfully. 
Jan 28 01:42:15.689000 audit[7263]: CRED_DISP pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.837464 kernel: audit: type=1106 audit(1769564535.684:1068): pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.837592 kernel: audit: type=1104 audit(1769564535.689:1069): pid=7263 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:15.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.88:22-10.0.0.1:57560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:15.836231 systemd-logind[1594]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:42:15.857801 systemd-logind[1594]: Removed session 36. 
Jan 28 01:42:18.382223 kubelet[2938]: E0128 01:42:18.381711 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:42:19.386713 kubelet[2938]: E0128 01:42:19.386556 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:42:19.386713 kubelet[2938]: E0128 01:42:19.386662 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:42:20.406056 kubelet[2938]: E0128 01:42:20.394471 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:42:20.659010 systemd[1]: Started sshd@36-10.0.0.88:22-10.0.0.1:57570.service - OpenSSH per-connection server daemon (10.0.0.1:57570). Jan 28 01:42:20.757720 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:42:20.757959 kernel: audit: type=1130 audit(1769564540.664:1071): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.88:22-10.0.0.1:57570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:20.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.88:22-10.0.0.1:57570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:42:21.108000 audit[7280]: USER_ACCT pid=7280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.113493 sshd-session[7280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:21.127561 sshd[7280]: Accepted publickey for core from 10.0.0.1 port 57570 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:21.176651 kernel: audit: type=1101 audit(1769564541.108:1072): pid=7280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.177066 kernel: audit: type=1103 audit(1769564541.110:1073): pid=7280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.110000 audit[7280]: CRED_ACQ pid=7280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.237596 kernel: audit: type=1006 audit(1769564541.110:1074): pid=7280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 28 01:42:21.237723 kernel: audit: type=1300 audit(1769564541.110:1074): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2ca5f760 a2=3 a3=0 items=0 ppid=1 pid=7280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:21.110000 audit[7280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2ca5f760 a2=3 a3=0 items=0 ppid=1 pid=7280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:21.221003 systemd-logind[1594]: New session 37 of user core. Jan 28 01:42:21.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:21.257414 kernel: audit: type=1327 audit(1769564541.110:1074): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:21.270459 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:42:21.321000 audit[7280]: USER_START pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.457131 kernel: audit: type=1105 audit(1769564541.321:1075): pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.460000 audit[7284]: CRED_ACQ pid=7284 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:21.522822 kernel: audit: type=1103 audit(1769564541.460:1076): pid=7284 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:22.362091 sshd[7284]: Connection closed by 10.0.0.1 port 57570 Jan 28 01:42:22.362206 sshd-session[7280]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:22.398060 kubelet[2938]: E0128 01:42:22.395592 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:42:22.388000 audit[7280]: USER_END pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:22.482956 systemd[1]: sshd@36-10.0.0.88:22-10.0.0.1:57570.service: Deactivated successfully. Jan 28 01:42:22.499149 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:42:22.534033 systemd-logind[1594]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:42:22.541939 systemd-logind[1594]: Removed session 37. 
Jan 28 01:42:22.561619 kernel: audit: type=1106 audit(1769564542.388:1077): pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:22.401000 audit[7280]: CRED_DISP pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:22.659372 kernel: audit: type=1104 audit(1769564542.401:1078): pid=7280 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:22.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.88:22-10.0.0.1:57570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:42:23.003668 containerd[1612]: time="2026-01-28T01:42:23.003496630Z" level=info msg="container event discarded" container=ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e type=CONTAINER_CREATED_EVENT Jan 28 01:42:23.003668 containerd[1612]: time="2026-01-28T01:42:23.003563975Z" level=info msg="container event discarded" container=ed53d82c8ecd012d99aa538e3d742078e8ce092d8470b22ef7ced7e65a93468e type=CONTAINER_STARTED_EVENT Jan 28 01:42:24.906000 audit[7298]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=7298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:42:24.906000 audit[7298]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa74f6f60 a2=0 a3=7fffa74f6f4c items=0 ppid=3042 pid=7298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:24.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:42:24.936000 audit[7298]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=7298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:42:24.936000 audit[7298]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffa74f6f60 a2=0 a3=7fffa74f6f4c items=0 ppid=3042 pid=7298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:24.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:42:25.376051 kubelet[2938]: E0128 01:42:25.373315 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:42:26.371077 kubelet[2938]: E0128 01:42:26.369719 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:42:27.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.88:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:27.460927 systemd[1]: Started sshd@37-10.0.0.88:22-10.0.0.1:46192.service - OpenSSH per-connection server daemon (10.0.0.1:46192). Jan 28 01:42:27.481562 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 01:42:27.481654 kernel: audit: type=1130 audit(1769564547.460:1082): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.88:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:42:27.898000 audit[7302]: USER_ACCT pid=7302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:27.915477 sshd[7302]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:27.913939 sshd-session[7302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:27.958083 systemd-logind[1594]: New session 38 of user core. Jan 28 01:42:27.972436 kernel: audit: type=1101 audit(1769564547.898:1083): pid=7302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:27.910000 audit[7302]: CRED_ACQ pid=7302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.035485 kernel: audit: type=1103 audit(1769564547.910:1084): pid=7302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.035629 kernel: audit: type=1006 audit(1769564547.910:1085): pid=7302 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 28 01:42:28.063424 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 28 01:42:27.910000 audit[7302]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd37b83d90 a2=3 a3=0 items=0 ppid=1 pid=7302 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:27.910000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:28.130965 kernel: audit: type=1300 audit(1769564547.910:1085): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd37b83d90 a2=3 a3=0 items=0 ppid=1 pid=7302 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:28.136100 kernel: audit: type=1327 audit(1769564547.910:1085): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:28.136159 kernel: audit: type=1105 audit(1769564548.103:1086): pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.103000 audit[7302]: USER_START pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.122000 audit[7306]: CRED_ACQ pid=7306 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.218901 kernel: audit: 
type=1103 audit(1769564548.122:1087): pid=7306 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.328565 containerd[1612]: time="2026-01-28T01:42:28.321918521Z" level=info msg="container event discarded" container=2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352 type=CONTAINER_CREATED_EVENT Jan 28 01:42:28.328565 containerd[1612]: time="2026-01-28T01:42:28.324158261Z" level=info msg="container event discarded" container=2f42d9fe6c6670e570b6233f6cbcc960576781ed1cbfde21da1a0a8756ad1352 type=CONTAINER_STARTED_EVENT Jan 28 01:42:28.698934 sshd[7306]: Connection closed by 10.0.0.1 port 46192 Jan 28 01:42:28.705626 sshd-session[7302]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:28.864453 kernel: audit: type=1106 audit(1769564548.719:1088): pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.719000 audit[7302]: USER_END pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.719000 audit[7302]: CRED_DISP pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.884959 systemd[1]: sshd@37-10.0.0.88:22-10.0.0.1:46192.service: 
Deactivated successfully. Jan 28 01:42:28.913548 kernel: audit: type=1104 audit(1769564548.719:1089): pid=7302 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:28.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.88:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:29.067089 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:42:29.137050 systemd-logind[1594]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:42:29.163548 systemd-logind[1594]: Removed session 38. Jan 28 01:42:29.388977 kubelet[2938]: E0128 01:42:29.385499 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:42:30.382213 kubelet[2938]: E0128 01:42:30.379144 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:42:30.422089 kubelet[2938]: E0128 01:42:30.419426 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef" Jan 28 01:42:31.430219 kubelet[2938]: E0128 01:42:31.423497 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7" Jan 28 01:42:31.430219 kubelet[2938]: E0128 01:42:31.426624 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:42:33.392009 kubelet[2938]: E0128 01:42:33.390061 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673" Jan 28 01:42:33.784502 systemd[1]: Started sshd@38-10.0.0.88:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). Jan 28 01:42:33.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.88:22-10.0.0.1:46130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:33.811646 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:42:33.811776 kernel: audit: type=1130 audit(1769564553.784:1091): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.88:22-10.0.0.1:46130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:34.213000 audit[7347]: USER_ACCT pid=7347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.219622 sshd[7347]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:34.231055 sshd-session[7347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:34.301690 systemd-logind[1594]: New session 39 of user core. 
Jan 28 01:42:34.222000 audit[7347]: CRED_ACQ pid=7347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.373563 kernel: audit: type=1101 audit(1769564554.213:1092): pid=7347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.373643 kernel: audit: type=1103 audit(1769564554.222:1093): pid=7347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.388932 kubelet[2938]: E0128 01:42:34.385848 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144" Jan 28 01:42:34.515991 kernel: audit: type=1006 audit(1769564554.222:1094): pid=7347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 28 01:42:34.222000 audit[7347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebedc8e10 a2=3 a3=0 items=0 ppid=1 pid=7347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:34.594869 kernel: audit: type=1300 audit(1769564554.222:1094): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebedc8e10 a2=3 a3=0 items=0 ppid=1 pid=7347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:34.595039 kernel: audit: type=1327 audit(1769564554.222:1094): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:34.222000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:34.605580 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 01:42:34.659000 audit[7347]: USER_START pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.818374 kernel: audit: type=1105 audit(1769564554.659:1095): pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.818533 kernel: audit: type=1103 audit(1769564554.679:1096): pid=7351 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:34.679000 audit[7351]: CRED_ACQ pid=7351 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:35.587366 sshd[7351]: Connection closed by 10.0.0.1 port 46130 Jan 28 01:42:35.590755 sshd-session[7347]: pam_unix(sshd:session): session closed for user core Jan 28 01:42:35.597000 audit[7347]: USER_END pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:35.619575 systemd[1]: sshd@38-10.0.0.88:22-10.0.0.1:46130.service: Deactivated successfully. Jan 28 01:42:35.627249 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:42:35.660070 systemd-logind[1594]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:42:35.671033 systemd-logind[1594]: Removed session 39. Jan 28 01:42:35.691575 kernel: audit: type=1106 audit(1769564555.597:1097): pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:35.691680 kernel: audit: type=1104 audit(1769564555.598:1098): pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:35.598000 audit[7347]: CRED_DISP pid=7347 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:35.619000 audit[1]: SERVICE_STOP pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.88:22-10.0.0.1:46130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:40.297927 containerd[1612]: time="2026-01-28T01:42:40.297770482Z" level=info msg="container event discarded" container=75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3 type=CONTAINER_CREATED_EVENT Jan 28 01:42:40.371541 kubelet[2938]: E0128 01:42:40.371507 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:42:40.378920 kubelet[2938]: E0128 01:42:40.378588 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s28bh" podUID="55f83d8e-e337-4a1b-9dba-8df114668f11" Jan 28 01:42:40.675700 systemd[1]: Started sshd@39-10.0.0.88:22-10.0.0.1:46144.service - OpenSSH per-connection server daemon (10.0.0.1:46144). Jan 28 01:42:40.705439 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:42:40.706246 kernel: audit: type=1130 audit(1769564560.675:1100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.88:22-10.0.0.1:46144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:42:40.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.88:22-10.0.0.1:46144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:42:40.893164 containerd[1612]: time="2026-01-28T01:42:40.892118646Z" level=info msg="container event discarded" container=75d2f2549c2366635033a0bce51007d5a100479ddcab72f021d3e8fd8f4964d3 type=CONTAINER_STARTED_EVENT Jan 28 01:42:41.317647 sshd[7366]: Accepted publickey for core from 10.0.0.1 port 46144 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:42:41.317000 audit[7366]: USER_ACCT pid=7366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:41.360742 sshd-session[7366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:42:41.394411 kernel: audit: type=1101 audit(1769564561.317:1101): pid=7366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:41.337000 audit[7366]: CRED_ACQ pid=7366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:41.443065 kubelet[2938]: E0128 01:42:41.421232 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-57jrt" 
podUID="df1949f7-cac3-4cf6-8c60-f8d963a49163" Jan 28 01:42:41.445086 kernel: audit: type=1103 audit(1769564561.337:1102): pid=7366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:42:41.338000 audit[7366]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd21ec0f0 a2=3 a3=0 items=0 ppid=1 pid=7366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:41.520901 kernel: audit: type=1006 audit(1769564561.338:1103): pid=7366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 28 01:42:41.520991 kernel: audit: type=1300 audit(1769564561.338:1103): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd21ec0f0 a2=3 a3=0 items=0 ppid=1 pid=7366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:42:41.521115 kernel: audit: type=1327 audit(1769564561.338:1103): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:41.338000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:42:41.519421 systemd-logind[1594]: New session 40 of user core. Jan 28 01:42:41.571057 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 28 01:42:41.613000 audit[7366]: USER_START pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:41.776650 kernel: audit: type=1105 audit(1769564561.613:1104): pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:41.776762 kernel: audit: type=1103 audit(1769564561.658:1105): pid=7370 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:41.658000 audit[7370]: CRED_ACQ pid=7370 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:42.418395 kubelet[2938]: E0128 01:42:42.417927 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4245v" podUID="a7aeb9de-99dc-45ef-b9ad-d9f2afb967ef"
Jan 28 01:42:42.438607 containerd[1612]: time="2026-01-28T01:42:42.438478794Z" level=info msg="container event discarded" container=d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc type=CONTAINER_CREATED_EVENT
Jan 28 01:42:42.503148 sshd[7370]: Connection closed by 10.0.0.1 port 46144
Jan 28 01:42:42.504609 sshd-session[7366]: pam_unix(sshd:session): session closed for user core
Jan 28 01:42:42.512000 audit[7366]: USER_END pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:42.525367 systemd[1]: sshd@39-10.0.0.88:22-10.0.0.1:46144.service: Deactivated successfully.
Jan 28 01:42:42.526033 systemd-logind[1594]: Session 40 logged out. Waiting for processes to exit.
Jan 28 01:42:42.536099 systemd[1]: session-40.scope: Deactivated successfully.
Jan 28 01:42:42.557790 systemd-logind[1594]: Removed session 40.
Jan 28 01:42:42.590643 kernel: audit: type=1106 audit(1769564562.512:1106): pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:42.590793 kernel: audit: type=1104 audit(1769564562.512:1107): pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:42.512000 audit[7366]: CRED_DISP pid=7366 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:42.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.88:22-10.0.0.1:46144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:42:43.338751 containerd[1612]: time="2026-01-28T01:42:43.338476941Z" level=info msg="container event discarded" container=d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc type=CONTAINER_STARTED_EVENT
Jan 28 01:42:43.424229 kubelet[2938]: E0128 01:42:43.424080 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757b4d7df4-d9b2x" podUID="a02c8935-a477-4f1a-ba8e-1d2c1d76c8e7"
Jan 28 01:42:43.891110 containerd[1612]: time="2026-01-28T01:42:43.891020443Z" level=info msg="container event discarded" container=d2ab8dd654be4139f5461f1432a080b825579e37ace95079cfff72b1099f0bfc type=CONTAINER_STOPPED_EVENT
Jan 28 01:42:46.402062 kubelet[2938]: E0128 01:42:46.400244 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78b6655f44-dr84p" podUID="477c43dc-f740-4bfd-b59c-255fe52c8673"
Jan 28 01:42:47.383137 kubelet[2938]: E0128 01:42:47.373721 2938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:42:47.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.88:22-10.0.0.1:45492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:42:47.556702 systemd[1]: Started sshd@40-10.0.0.88:22-10.0.0.1:45492.service - OpenSSH per-connection server daemon (10.0.0.1:45492).
Jan 28 01:42:47.591661 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:42:47.591826 kernel: audit: type=1130 audit(1769564567.556:1109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.88:22-10.0.0.1:45492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:42:47.920388 kernel: audit: type=1101 audit(1769564567.890:1110): pid=7384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:47.890000 audit[7384]: USER_ACCT pid=7384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:47.918147 sshd-session[7384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:42:47.921437 sshd[7384]: Accepted publickey for core from 10.0.0.1 port 45492 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E
Jan 28 01:42:47.906000 audit[7384]: CRED_ACQ pid=7384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:47.959524 kernel: audit: type=1103 audit(1769564567.906:1111): pid=7384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:47.981153 systemd-logind[1594]: New session 41 of user core.
Jan 28 01:42:47.990571 kernel: audit: type=1006 audit(1769564567.906:1112): pid=7384 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1
Jan 28 01:42:48.001575 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 28 01:42:47.906000 audit[7384]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e19e720 a2=3 a3=0 items=0 ppid=1 pid=7384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:42:48.037358 kernel: audit: type=1300 audit(1769564567.906:1112): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e19e720 a2=3 a3=0 items=0 ppid=1 pid=7384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:42:47.906000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:42:48.077382 kernel: audit: type=1327 audit(1769564567.906:1112): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:42:48.060000 audit[7384]: USER_START pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:48.112380 kernel: audit: type=1105 audit(1769564568.060:1113): pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:48.065000 audit[7388]: CRED_ACQ pid=7388 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:48.186432 kernel: audit: type=1103 audit(1769564568.065:1114): pid=7388 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:49.085422 sshd[7388]: Connection closed by 10.0.0.1 port 45492
Jan 28 01:42:49.085976 sshd-session[7384]: pam_unix(sshd:session): session closed for user core
Jan 28 01:42:49.092000 audit[7384]: USER_END pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:49.138694 systemd-logind[1594]: Session 41 logged out. Waiting for processes to exit.
Jan 28 01:42:49.159569 kernel: audit: type=1106 audit(1769564569.092:1115): pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:49.092000 audit[7384]: CRED_DISP pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:49.165804 systemd[1]: sshd@40-10.0.0.88:22-10.0.0.1:45492.service: Deactivated successfully.
Jan 28 01:42:49.192678 systemd[1]: session-41.scope: Deactivated successfully.
Jan 28 01:42:49.209477 kernel: audit: type=1104 audit(1769564569.092:1116): pid=7384 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:42:49.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.88:22-10.0.0.1:45492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:42:49.251749 systemd-logind[1594]: Removed session 41.
Jan 28 01:42:49.432643 kubelet[2938]: E0128 01:42:49.432593 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcc88c58b-n2mcr" podUID="38764aa9-f6ea-4a8f-ac0e-198fa6f97144"