Jan 19 12:06:26.036215 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 09:38:41 -00 2026 Jan 19 12:06:26.036241 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 12:06:26.036255 kernel: BIOS-provided physical RAM map: Jan 19 12:06:26.036267 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 19 12:06:26.036273 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 19 12:06:26.036279 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 19 12:06:26.036286 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 19 12:06:26.036292 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 19 12:06:26.036298 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 19 12:06:26.036304 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 19 12:06:26.036310 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 19 12:06:26.036318 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 19 12:06:26.036324 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 19 12:06:26.036330 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 19 12:06:26.036337 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 19 12:06:26.036344 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 19 12:06:26.036352 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 19 12:06:26.036359 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 19 12:06:26.036365 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 19 12:06:26.036371 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 19 12:06:26.036378 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 19 12:06:26.036384 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 19 12:06:26.036391 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 19 12:06:26.036397 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 19 12:06:26.036403 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 19 12:06:26.036410 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 19 12:06:26.036650 kernel: NX (Execute Disable) protection: active Jan 19 12:06:26.036658 kernel: APIC: Static calls initialized Jan 19 12:06:26.036665 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 19 12:06:26.036672 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 19 12:06:26.036678 kernel: extended physical RAM map: Jan 19 12:06:26.036685 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 19 12:06:26.036691 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 19 12:06:26.036698 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 19 12:06:26.036704 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 19 12:06:26.036711 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 19 12:06:26.036717 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 19 12:06:26.036727 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 19 12:06:26.036733 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 19 12:06:26.036740 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 19 12:06:26.036749 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 19 12:06:26.036758 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 19 12:06:26.036765 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 19 12:06:26.036772 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 19 12:06:26.036778 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 19 12:06:26.036785 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 19 12:06:26.036792 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 19 12:06:26.036799 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 19 12:06:26.036806 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 19 12:06:26.036812 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 19 12:06:26.036821 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 19 12:06:26.036828 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 19 12:06:26.036835 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 19 12:06:26.036841 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 19 12:06:26.036848 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 19 12:06:26.036855 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 19 12:06:26.036862 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 19 12:06:26.036868 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 19 12:06:26.036875 kernel: efi: EFI v2.7 by EDK II Jan 19 12:06:26.036885 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 19 12:06:26.036897 kernel: random: crng init done Jan 19 12:06:26.036913 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 19 12:06:26.036922 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 19 12:06:26.036928 kernel: secureboot: Secure boot disabled Jan 19 12:06:26.036935 kernel: SMBIOS 2.8 present. 
Jan 19 12:06:26.036942 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 19 12:06:26.036949 kernel: DMI: Memory slots populated: 1/1 Jan 19 12:06:26.036955 kernel: Hypervisor detected: KVM Jan 19 12:06:26.036962 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 19 12:06:26.036969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 19 12:06:26.036976 kernel: kvm-clock: using sched offset of 8335115585 cycles Jan 19 12:06:26.036983 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 19 12:06:26.036993 kernel: tsc: Detected 2445.426 MHz processor Jan 19 12:06:26.037000 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 19 12:06:26.037007 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 19 12:06:26.037014 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 19 12:06:26.037021 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 19 12:06:26.037028 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 19 12:06:26.037035 kernel: Using GB pages for direct mapping Jan 19 12:06:26.037044 kernel: ACPI: Early table checksum verification disabled Jan 19 12:06:26.037051 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 19 12:06:26.037058 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 19 12:06:26.037065 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037073 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037080 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 19 12:06:26.037087 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037096 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037103 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037110 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 12:06:26.037117 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 19 12:06:26.037124 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 19 12:06:26.037131 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 19 12:06:26.037143 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 19 12:06:26.037159 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 19 12:06:26.037171 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 19 12:06:26.037182 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 19 12:06:26.037189 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 19 12:06:26.037196 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 19 12:06:26.037203 kernel: No NUMA configuration found Jan 19 12:06:26.037210 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 19 12:06:26.037217 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 19 12:06:26.037226 kernel: Zone ranges: Jan 19 12:06:26.037234 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 19 12:06:26.037241 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 19 12:06:26.037248 kernel: Normal empty Jan 19 12:06:26.037255 kernel: Device empty Jan 19 
12:06:26.037262 kernel: Movable zone start for each node Jan 19 12:06:26.037269 kernel: Early memory node ranges Jan 19 12:06:26.037275 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 19 12:06:26.037284 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 19 12:06:26.037292 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 19 12:06:26.037299 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 19 12:06:26.037306 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 19 12:06:26.037313 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 19 12:06:26.037320 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 19 12:06:26.037326 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 19 12:06:26.037335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 19 12:06:26.037342 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 19 12:06:26.037355 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 19 12:06:26.037365 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 19 12:06:26.037372 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 19 12:06:26.037379 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 19 12:06:26.037386 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 19 12:06:26.037394 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 19 12:06:26.037408 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 19 12:06:26.037650 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 19 12:06:26.037662 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 19 12:06:26.037670 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 19 12:06:26.037677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 19 12:06:26.037684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 19 12:06:26.037694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 19 12:06:26.037701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 19 12:06:26.037709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 19 12:06:26.037716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 19 12:06:26.037723 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 19 12:06:26.037730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 19 12:06:26.037738 kernel: TSC deadline timer available Jan 19 12:06:26.037747 kernel: CPU topo: Max. logical packages: 1 Jan 19 12:06:26.037754 kernel: CPU topo: Max. logical dies: 1 Jan 19 12:06:26.037761 kernel: CPU topo: Max. dies per package: 1 Jan 19 12:06:26.037768 kernel: CPU topo: Max. threads per core: 1 Jan 19 12:06:26.037775 kernel: CPU topo: Num. cores per package: 4 Jan 19 12:06:26.037783 kernel: CPU topo: Num. 
threads per package: 4 Jan 19 12:06:26.037796 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 19 12:06:26.037810 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 19 12:06:26.037821 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 19 12:06:26.037829 kernel: kvm-guest: setup PV sched yield Jan 19 12:06:26.037836 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 19 12:06:26.037843 kernel: Booting paravirtualized kernel on KVM Jan 19 12:06:26.037851 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 19 12:06:26.037858 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 19 12:06:26.037865 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 19 12:06:26.037875 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 19 12:06:26.037882 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 19 12:06:26.037889 kernel: kvm-guest: PV spinlocks enabled Jan 19 12:06:26.037897 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 19 12:06:26.037905 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 12:06:26.037912 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 19 12:06:26.037922 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 19 12:06:26.037929 kernel: Fallback order for Node 0: 0 Jan 19 12:06:26.037936 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 19 12:06:26.037943 kernel: Policy zone: DMA32 Jan 19 12:06:26.037951 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 19 12:06:26.037958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 19 12:06:26.037965 kernel: ftrace: allocating 40128 entries in 157 pages Jan 19 12:06:26.037972 kernel: ftrace: allocated 157 pages with 5 groups Jan 19 12:06:26.037982 kernel: Dynamic Preempt: voluntary Jan 19 12:06:26.037989 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 19 12:06:26.038000 kernel: rcu: RCU event tracing is enabled. Jan 19 12:06:26.038007 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 19 12:06:26.038015 kernel: Trampoline variant of Tasks RCU enabled. Jan 19 12:06:26.038022 kernel: Rude variant of Tasks RCU enabled. Jan 19 12:06:26.038030 kernel: Tracing variant of Tasks RCU enabled. Jan 19 12:06:26.038039 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 19 12:06:26.038050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 19 12:06:26.038063 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 12:06:26.038076 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 12:06:26.038088 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 12:06:26.038095 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 19 12:06:26.038103 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 19 12:06:26.038113 kernel: Console: colour dummy device 80x25 Jan 19 12:06:26.038120 kernel: printk: legacy console [ttyS0] enabled Jan 19 12:06:26.038127 kernel: ACPI: Core revision 20240827 Jan 19 12:06:26.038135 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 19 12:06:26.038142 kernel: APIC: Switch to symmetric I/O mode setup Jan 19 12:06:26.038149 kernel: x2apic enabled Jan 19 12:06:26.038157 kernel: APIC: Switched APIC routing to: physical x2apic Jan 19 12:06:26.038164 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 19 12:06:26.038173 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 19 12:06:26.038180 kernel: kvm-guest: setup PV IPIs Jan 19 12:06:26.038188 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 19 12:06:26.038195 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 19 12:06:26.038203 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 19 12:06:26.038210 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 19 12:06:26.038217 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 19 12:06:26.038226 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 19 12:06:26.038234 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 19 12:06:26.038241 kernel: Spectre V2 : Mitigation: Retpolines Jan 19 12:06:26.038249 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 19 12:06:26.038256 kernel: Speculative Store Bypass: Vulnerable Jan 19 12:06:26.038263 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 19 12:06:26.038273 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 19 12:06:26.038280 kernel: active return thunk: srso_alias_return_thunk Jan 19 12:06:26.038288 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 19 12:06:26.038295 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 19 12:06:26.038302 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 19 12:06:26.038316 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 19 12:06:26.038330 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 19 12:06:26.038346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 19 12:06:26.038356 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 19 12:06:26.038364 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 19 12:06:26.038371 kernel: Freeing SMP alternatives memory: 32K Jan 19 12:06:26.038378 kernel: pid_max: default: 32768 minimum: 301 Jan 19 12:06:26.038386 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 19 12:06:26.038393 kernel: landlock: Up and running. Jan 19 12:06:26.038402 kernel: SELinux: Initializing. 
Jan 19 12:06:26.038410 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 19 12:06:26.038635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 19 12:06:26.038644 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 19 12:06:26.038651 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 19 12:06:26.038659 kernel: signal: max sigframe size: 1776 Jan 19 12:06:26.038666 kernel: rcu: Hierarchical SRCU implementation. Jan 19 12:06:26.038676 kernel: rcu: Max phase no-delay instances is 400. Jan 19 12:06:26.038684 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 19 12:06:26.038691 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 19 12:06:26.038698 kernel: smp: Bringing up secondary CPUs ... Jan 19 12:06:26.038706 kernel: smpboot: x86: Booting SMP configuration: Jan 19 12:06:26.038713 kernel: .... node #0, CPUs: #1 #2 #3 Jan 19 12:06:26.038720 kernel: smp: Brought up 1 node, 4 CPUs Jan 19 12:06:26.038728 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 19 12:06:26.038738 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved) Jan 19 12:06:26.038745 kernel: devtmpfs: initialized Jan 19 12:06:26.038752 kernel: x86/mm: Memory block size: 128MB Jan 19 12:06:26.038760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 19 12:06:26.038767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 19 12:06:26.038774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 19 12:06:26.038782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 19 12:06:26.038797 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 19 12:06:26.038810 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 19 12:06:26.038820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 19 12:06:26.038827 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 19 12:06:26.038834 kernel: pinctrl core: initialized pinctrl subsystem Jan 19 12:06:26.038842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 19 12:06:26.038851 kernel: audit: initializing netlink subsys (disabled) Jan 19 12:06:26.038859 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 19 12:06:26.038866 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 19 12:06:26.038873 kernel: audit: type=2000 audit(1768824375.783:1): state=initialized audit_enabled=0 res=1 Jan 19 12:06:26.038881 kernel: cpuidle: using governor menu Jan 19 12:06:26.038888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 19 12:06:26.038895 kernel: dca service started, version 1.12.1 Jan 19 12:06:26.038903 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 19 12:06:26.038912 kernel: PCI: Using configuration type 1 for base access Jan 19 12:06:26.038919 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 19 12:06:26.038927 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 19 12:06:26.038934 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 19 12:06:26.038941 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 19 12:06:26.038948 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 19 12:06:26.038956 kernel: ACPI: Added _OSI(Module Device) Jan 19 12:06:26.038965 kernel: ACPI: Added _OSI(Processor Device) Jan 19 12:06:26.038972 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 19 12:06:26.038979 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 19 12:06:26.038987 kernel: ACPI: Interpreter enabled Jan 19 12:06:26.038994 kernel: ACPI: PM: (supports S0 S3 S5) Jan 19 12:06:26.039001 kernel: ACPI: Using IOAPIC for interrupt routing Jan 19 12:06:26.039008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 19 12:06:26.039018 kernel: PCI: Using E820 reservations for host bridge windows Jan 19 12:06:26.039025 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 19 12:06:26.039032 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 19 12:06:26.039270 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 19 12:06:26.039792 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 19 12:06:26.039978 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 19 12:06:26.040000 kernel: PCI host bridge to bus 0000:00 Jan 19 12:06:26.040176 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 19 12:06:26.040350 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 19 12:06:26.040760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 19 12:06:26.040936 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 19 12:06:26.041100 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 19 12:06:26.041271 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 19 12:06:26.041746 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 19 12:06:26.041941 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 19 12:06:26.042119 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 19 12:06:26.042293 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 19 12:06:26.045702 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 19 12:06:26.045884 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 19 12:06:26.046053 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 19 12:06:26.046220 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 20507 usecs Jan 19 12:06:26.046398 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 19 12:06:26.046811 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 19 12:06:26.046983 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 19 12:06:26.047161 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 19 12:06:26.047354 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 19 12:06:26.047771 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 19 12:06:26.047946 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xc1042000-0xc1042fff] Jan 19 12:06:26.048118 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jan 19 12:06:26.048292 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 19 12:06:26.048684 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 19 12:06:26.048857 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 19 12:06:26.049098 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 19 12:06:26.049293 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 19 12:06:26.049859 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 19 12:06:26.050055 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 19 12:06:26.050245 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 17578 usecs Jan 19 12:06:26.050696 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 19 12:06:26.050878 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 19 12:06:26.051082 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 19 12:06:26.051282 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 19 12:06:26.051713 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 19 12:06:26.051726 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 19 12:06:26.051734 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 19 12:06:26.051742 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 19 12:06:26.051753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 19 12:06:26.051760 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 19 12:06:26.051768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 19 12:06:26.051775 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 19 12:06:26.051782 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 19 12:06:26.051790 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 19 12:06:26.051797 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 19 12:06:26.051804 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 19 12:06:26.051814 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 19 12:06:26.051825 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 19 12:06:26.051838 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 19 12:06:26.051850 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 19 12:06:26.051857 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 19 12:06:26.051864 kernel: iommu: Default domain type: Translated Jan 19 12:06:26.051872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 19 12:06:26.051881 kernel: efivars: Registered efivars operations Jan 19 12:06:26.051889 kernel: PCI: Using ACPI for IRQ routing Jan 19 12:06:26.051896 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 19 12:06:26.051904 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 19 12:06:26.051911 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 19 12:06:26.051918 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 19 12:06:26.051925 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 19 12:06:26.051935 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 19 12:06:26.051942 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 19 
12:06:26.051949 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 19 12:06:26.051957 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 19 12:06:26.052143 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 19 12:06:26.052336 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 19 12:06:26.052764 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 19 12:06:26.052778 kernel: vgaarb: loaded Jan 19 12:06:26.052786 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 19 12:06:26.052794 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 19 12:06:26.052801 kernel: clocksource: Switched to clocksource kvm-clock Jan 19 12:06:26.052808 kernel: VFS: Disk quotas dquot_6.6.0 Jan 19 12:06:26.052816 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 19 12:06:26.052823 kernel: pnp: PnP ACPI init Jan 19 12:06:26.053025 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 19 12:06:26.053038 kernel: pnp: PnP ACPI: found 6 devices Jan 19 12:06:26.053045 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 19 12:06:26.053053 kernel: NET: Registered PF_INET protocol family Jan 19 12:06:26.053061 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 19 12:06:26.053068 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 19 12:06:26.053103 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 19 12:06:26.053113 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 19 12:06:26.053121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 19 12:06:26.053129 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 19 12:06:26.053137 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 19 12:06:26.053145 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 19 12:06:26.053152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 19 12:06:26.053162 kernel: NET: Registered PF_XDP protocol family Jan 19 12:06:26.053348 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 19 12:06:26.053776 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 19 12:06:26.053953 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 19 12:06:26.054128 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 19 12:06:26.054285 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 19 12:06:26.054757 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 19 12:06:26.054923 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 19 12:06:26.055097 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 19 12:06:26.055109 kernel: PCI: CLS 0 bytes, default 64 Jan 19 12:06:26.055117 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 19 12:06:26.055125 kernel: Initialise system trusted keyrings Jan 19 12:06:26.055133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 19 12:06:26.055144 kernel: Key type asymmetric registered Jan 19 12:06:26.055152 kernel: Asymmetric key parser 'x509' registered Jan 19 12:06:26.055159 kernel: Block layer SCSI generic (bsg) driver version 
0.4 loaded (major 250) Jan 19 12:06:26.055167 kernel: io scheduler mq-deadline registered Jan 19 12:06:26.055174 kernel: io scheduler kyber registered Jan 19 12:06:26.055185 kernel: io scheduler bfq registered Jan 19 12:06:26.055199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 19 12:06:26.055218 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 19 12:06:26.055237 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 19 12:06:26.055246 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 19 12:06:26.055254 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 19 12:06:26.055262 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 19 12:06:26.055272 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 19 12:06:26.055279 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 19 12:06:26.055287 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 19 12:06:26.057730 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 19 12:06:26.057747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 19 12:06:26.057934 kernel: rtc_cmos 00:04: registered as rtc0 Jan 19 12:06:26.058123 kernel: rtc_cmos 00:04: setting system clock to 2026-01-19T12:06:21 UTC (1768824381) Jan 19 12:06:26.058310 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 19 12:06:26.058325 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 19 12:06:26.058336 kernel: efifb: probing for efifb Jan 19 12:06:26.058344 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 19 12:06:26.058352 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 19 12:06:26.058360 kernel: efifb: scrolling: redraw Jan 19 12:06:26.058370 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 19 12:06:26.058377 kernel: Console: switching to colour frame buffer device 160x50 Jan 19 12:06:26.058385 kernel: fb0: EFI VGA frame buffer device Jan 19 12:06:26.058393 kernel: pstore: Using crash dump compression: deflate Jan 19 12:06:26.058401 kernel: pstore: Registered efi_pstore as persistent store backend Jan 19 12:06:26.058408 kernel: NET: Registered PF_INET6 protocol family Jan 19 12:06:26.058648 kernel: Segment Routing with IPv6 Jan 19 12:06:26.058662 kernel: In-situ OAM (IOAM) with IPv6 Jan 19 12:06:26.058670 kernel: NET: Registered PF_PACKET protocol family Jan 19 12:06:26.058678 kernel: Key type dns_resolver registered Jan 19 12:06:26.058686 kernel: IPI shorthand broadcast: enabled Jan 19 12:06:26.058694 kernel: sched_clock: Marking stable (5537120550, 1120884357)->(7527462552, -869457645) Jan 19 12:06:26.058701 kernel: registered taskstats version 1 Jan 19 12:06:26.058709 kernel: Loading compiled-in X.509 certificates Jan 19 12:06:26.058717 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ba909111c102256a4abe14f4fc03cb5c21d9fa72' Jan 19 12:06:26.058732 kernel: Demotion targets for Node 0: null Jan 19 12:06:26.058745 kernel: Key type .fscrypt registered Jan 19 12:06:26.058754 kernel: Key type fscrypt-provisioning registered Jan 19 12:06:26.058762 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 19 12:06:26.058770 kernel: ima: Allocated hash algorithm: sha1 Jan 19 12:06:26.058777 kernel: ima: No architecture policies found Jan 19 12:06:26.058787 kernel: clk: Disabling unused clocks Jan 19 12:06:26.058903 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 19 12:06:26.058918 kernel: Write protecting the kernel read-only data: 47104k Jan 19 12:06:26.058928 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 19 12:06:26.058936 kernel: Run /init as init process Jan 19 12:06:26.058944 kernel: with arguments: Jan 19 12:06:26.058951 kernel: /init Jan 19 12:06:26.058959 kernel: with environment: Jan 19 12:06:26.058970 kernel: HOME=/ Jan 19 12:06:26.058977 kernel: TERM=linux Jan 19 12:06:26.058985 kernel: SCSI subsystem initialized Jan 19 12:06:26.058992 kernel: libata version 3.00 loaded. Jan 19 12:06:26.059253 kernel: ahci 0000:00:1f.2: version 3.0 Jan 19 12:06:26.059268 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 19 12:06:26.059699 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 19 12:06:26.059880 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 19 12:06:26.060050 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 19 12:06:26.060240 kernel: scsi host0: ahci Jan 19 12:06:26.060753 kernel: scsi host1: ahci Jan 19 12:06:26.060947 kernel: scsi host2: ahci Jan 19 12:06:26.061133 kernel: scsi host3: ahci Jan 19 12:06:26.061313 kernel: scsi host4: ahci Jan 19 12:06:26.061732 kernel: scsi host5: ahci Jan 19 12:06:26.061746 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 19 12:06:26.061755 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 19 12:06:26.061763 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 19 12:06:26.061776 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 19 12:06:26.061784 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 19 12:06:26.061792 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 19 12:06:26.061800 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 19 12:06:26.061808 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 19 12:06:26.061816 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 19 12:06:26.061824 kernel: ata3.00: LPM support broken, forcing max_power Jan 19 12:06:26.061833 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 19 12:06:26.061841 kernel: ata3.00: applying bridge limits Jan 19 12:06:26.061849 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 19 12:06:26.061857 kernel: ata3.00: LPM support broken, forcing max_power Jan 19 12:06:26.061864 kernel: ata3.00: configured for UDMA/100 Jan 19 12:06:26.061873 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 19 12:06:26.062072 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 19 12:06:26.062086 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 19 12:06:26.062268 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 19 12:06:26.062673 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 19 12:06:26.062686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 19 12:06:26.062874 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 19 12:06:26.062884 kernel: GPT:16515071 != 27000831 Jan 19 12:06:26.062896 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 19 12:06:26.062904 kernel: GPT:16515071 != 27000831 Jan 19 12:06:26.062911 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 19 12:06:26.062919 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 19 12:06:26.062927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 19 12:06:26.063110 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 19 12:06:26.063121 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 19 12:06:26.063131 kernel: device-mapper: uevent: version 1.0.3 Jan 19 12:06:26.063139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 19 12:06:26.063147 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 19 12:06:26.063154 kernel: raid6: avx2x4 gen() 31873 MB/s Jan 19 12:06:26.063162 kernel: raid6: avx2x2 gen() 29951 MB/s Jan 19 12:06:26.063170 kernel: raid6: avx2x1 gen() 23652 MB/s Jan 19 12:06:26.063177 kernel: raid6: using algorithm avx2x4 gen() 31873 MB/s Jan 19 12:06:26.063187 kernel: raid6: .... xor() 4711 MB/s, rmw enabled Jan 19 12:06:26.063195 kernel: raid6: using avx2x2 recovery algorithm Jan 19 12:06:26.063203 kernel: xor: automatically using best checksumming function avx Jan 19 12:06:26.063210 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 19 12:06:26.063218 kernel: BTRFS: device fsid 163044fe-e6e3-4007-9021-e65918f0e7ac devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 19 12:06:26.063226 kernel: BTRFS info (device dm-0): first mount of filesystem 163044fe-e6e3-4007-9021-e65918f0e7ac Jan 19 12:06:26.063233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 19 12:06:26.063243 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 19 12:06:26.063251 kernel: BTRFS info (device dm-0): enabling free space tree Jan 19 12:06:26.063258 kernel: loop: module loaded Jan 19 12:06:26.063266 kernel: loop0: detected capacity change from 0 to 100552 Jan 19 12:06:26.063273 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 19 12:06:26.063282 systemd[1]: Successfully made /usr/ read-only. Jan 19 12:06:26.063292 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 12:06:26.063302 systemd[1]: Detected virtualization kvm. Jan 19 12:06:26.063310 systemd[1]: Detected architecture x86-64. Jan 19 12:06:26.063318 systemd[1]: Running in initrd. Jan 19 12:06:26.063326 systemd[1]: No hostname configured, using default hostname. Jan 19 12:06:26.063334 systemd[1]: Hostname set to . Jan 19 12:06:26.063341 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 12:06:26.063351 systemd[1]: Queued start job for default target initrd.target. Jan 19 12:06:26.063359 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 12:06:26.063367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 19 12:06:26.063375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 12:06:26.063384 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 19 12:06:26.063394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 12:06:26.063405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 19 12:06:26.063413 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 19 12:06:26.064695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 12:06:26.064709 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 12:06:26.064721 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 19 12:06:26.064737 systemd[1]: Reached target paths.target - Path Units. Jan 19 12:06:26.064757 systemd[1]: Reached target slices.target - Slice Units. Jan 19 12:06:26.064768 systemd[1]: Reached target swap.target - Swaps. Jan 19 12:06:26.064776 systemd[1]: Reached target timers.target - Timer Units. Jan 19 12:06:26.064784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 12:06:26.064792 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 12:06:26.064800 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 12:06:26.064812 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 19 12:06:26.064823 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 19 12:06:26.064831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 12:06:26.064839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 12:06:26.064847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 12:06:26.064854 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 12:06:26.064863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 19 12:06:26.064873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 19 12:06:26.064881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 12:06:26.064889 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 19 12:06:26.064897 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 19 12:06:26.064908 systemd[1]: Starting systemd-fsck-usr.service... Jan 19 12:06:26.064922 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 12:06:26.064936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 12:06:26.064947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 12:06:26.064956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 19 12:06:26.064990 systemd-journald[319]: Collecting audit messages is enabled. Jan 19 12:06:26.065012 kernel: audit: type=1130 audit(1768824386.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:26.065021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 12:06:26.065029 systemd-journald[319]: Journal started Jan 19 12:06:26.065051 systemd-journald[319]: Runtime Journal (/run/log/journal/06c5738ad16d42cc946ca2e3a2030fb9) is 6M, max 48M, 42M free. Jan 19 12:06:26.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.104718 systemd[1]: Started systemd-journald.service - Journal Service. Jan 19 12:06:26.104747 kernel: audit: type=1130 audit(1768824386.092:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.144638 systemd[1]: Finished systemd-fsck-usr.service. Jan 19 12:06:26.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.192412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:06:26.312311 kernel: audit: type=1130 audit(1768824386.142:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.312344 kernel: audit: type=1130 audit(1768824386.188:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.312356 kernel: audit: type=1130 audit(1768824386.269:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.275247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 19 12:06:26.330058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 19 12:06:26.368095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 12:06:26.452792 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 19 12:06:26.471637 kernel: Bridge firewalling registered Jan 19 12:06:26.471276 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 19 12:06:26.473158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 12:06:26.481221 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 19 12:06:26.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.527028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 12:06:26.560004 kernel: audit: type=1130 audit(1768824386.525:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.623657 kernel: audit: type=1130 audit(1768824386.592:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.623743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 12:06:26.689199 kernel: audit: type=1130 audit(1768824386.640:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.669900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 12:06:26.750845 kernel: audit: type=1130 audit(1768824386.705:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.739124 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 19 12:06:26.777107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 12:06:26.807750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 12:06:26.864997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 12:06:26.929068 kernel: audit: type=1130 audit(1768824386.881:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:26.909999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 12:06:26.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:26.947000 audit: BPF prog-id=6 op=LOAD Jan 19 12:06:26.961976 dracut-cmdline[349]: dracut-109 Jan 19 12:06:26.961976 dracut-cmdline[349]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 12:06:26.949160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 19 12:06:27.166344 systemd-resolved[378]: Positive Trust Anchors: Jan 19 12:06:27.166811 systemd-resolved[378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 12:06:27.166817 systemd-resolved[378]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 12:06:27.166842 systemd-resolved[378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 12:06:27.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:27.204285 systemd-resolved[378]: Defaulting to hostname 'linux'. Jan 19 12:06:27.205890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 19 12:06:27.218814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 12:06:27.586773 kernel: Loading iSCSI transport class v2.0-870. Jan 19 12:06:27.630840 kernel: iscsi: registered transport (tcp) Jan 19 12:06:27.680067 kernel: iscsi: registered transport (qla4xxx) Jan 19 12:06:27.680127 kernel: QLogic iSCSI HBA Driver Jan 19 12:06:27.767377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 12:06:27.844759 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 12:06:27.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:27.849995 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 19 12:06:27.990833 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 19 12:06:27.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:27.994396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 19 12:06:28.054078 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 19 12:06:28.160251 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 19 12:06:28.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.178000 audit: BPF prog-id=7 op=LOAD Jan 19 12:06:28.179000 audit: BPF prog-id=8 op=LOAD Jan 19 12:06:28.181023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 12:06:28.270288 systemd-udevd[580]: Using default interface naming scheme 'v257'. Jan 19 12:06:28.312953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 12:06:28.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.336796 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 19 12:06:28.445015 dracut-pre-trigger[626]: rd.md=0: removing MD RAID activation Jan 19 12:06:28.585702 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 12:06:28.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.625070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 12:06:28.670378 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 12:06:28.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.689000 audit: BPF prog-id=9 op=LOAD Jan 19 12:06:28.691915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 12:06:28.803022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 12:06:28.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.830264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 19 12:06:28.879400 systemd-networkd[727]: lo: Link UP Jan 19 12:06:28.880891 systemd-networkd[727]: lo: Gained carrier Jan 19 12:06:28.887334 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 12:06:28.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:28.924905 systemd[1]: Reached target network.target - Network. Jan 19 12:06:29.000390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 19 12:06:29.052211 kernel: cryptd: max_cpu_qlen set to 1000 Jan 19 12:06:29.114802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 19 12:06:29.151136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 19 12:06:29.193122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 19 12:06:29.298915 kernel: AES CTR mode by8 optimization enabled Jan 19 12:06:29.259117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 19 12:06:29.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:29.281384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 12:06:29.281851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:06:29.303043 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 12:06:29.326024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 12:06:29.441933 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 19 12:06:29.334191 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:06:29.334196 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 12:06:29.502205 disk-uuid[828]: Primary Header is updated. Jan 19 12:06:29.502205 disk-uuid[828]: Secondary Entries is updated. Jan 19 12:06:29.502205 disk-uuid[828]: Secondary Header is updated. Jan 19 12:06:29.335042 systemd-networkd[727]: eth0: Link UP Jan 19 12:06:29.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:29.336747 systemd-networkd[727]: eth0: Gained carrier Jan 19 12:06:29.336757 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:06:29.519960 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 19 12:06:29.570931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:06:29.730378 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 19 12:06:29.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:29.766096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 12:06:29.774658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 12:06:29.803252 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 12:06:29.870318 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 19 12:06:29.958743 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 19 12:06:29.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:30.462237 systemd-networkd[727]: eth0: Gained IPv6LL Jan 19 12:06:30.548763 disk-uuid[829]: Warning: The kernel is still using the old partition table. 
Jan 19 12:06:30.548763 disk-uuid[829]: The new table will be used at the next reboot or after you Jan 19 12:06:30.548763 disk-uuid[829]: run partprobe(8) or kpartx(8) Jan 19 12:06:30.548763 disk-uuid[829]: The operation has completed successfully. Jan 19 12:06:30.613224 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 19 12:06:30.613806 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 19 12:06:30.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:30.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:30.640711 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 19 12:06:30.773071 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Jan 19 12:06:30.802858 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 12:06:30.802930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 12:06:30.856018 kernel: BTRFS info (device vda6): turning on async discard Jan 19 12:06:30.856201 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 12:06:30.890967 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 12:06:30.900189 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 19 12:06:30.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:30.928753 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 19 12:06:31.152936 ignition[882]: Ignition 2.24.0 Jan 19 12:06:31.153372 ignition[882]: Stage: fetch-offline Jan 19 12:06:31.153627 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:31.153642 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:31.153722 ignition[882]: parsed url from cmdline: "" Jan 19 12:06:31.153726 ignition[882]: no config URL provided Jan 19 12:06:31.153731 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 19 12:06:31.153741 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 19 12:06:31.153778 ignition[882]: op(1): [started] loading QEMU firmware config module Jan 19 12:06:31.153783 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 19 12:06:31.282131 ignition[882]: op(1): [finished] loading QEMU firmware config module Jan 19 12:06:32.784390 ignition[882]: parsing config with SHA512: 476f69cb6aa513f1ea27540f010a087185fd77de92fe772b6f8cb18f7f374c5c1569e8b4328b83a482608a9ef4affd142ca057dacecbd7c5784ce4ba216bad37 Jan 19 12:06:32.865951 unknown[882]: fetched base config from "system" Jan 19 12:06:32.865968 unknown[882]: fetched user config from "qemu" Jan 19 12:06:32.867232 ignition[882]: fetch-offline: fetch-offline passed Jan 19 12:06:32.867311 ignition[882]: Ignition finished successfully Jan 19 12:06:32.918795 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 19 12:06:32.998034 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 19 12:06:32.998066 kernel: audit: type=1130 audit(1768824392.936:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:32.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:32.937798 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 19 12:06:32.940683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 19 12:06:33.156399 ignition[892]: Ignition 2.24.0 Jan 19 12:06:33.157148 ignition[892]: Stage: kargs Jan 19 12:06:33.157754 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:33.157769 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:33.159023 ignition[892]: kargs: kargs passed Jan 19 12:06:33.159075 ignition[892]: Ignition finished successfully Jan 19 12:06:33.225835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 19 12:06:33.290980 kernel: audit: type=1130 audit(1768824393.240:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.244138 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 19 12:06:33.426127 ignition[900]: Ignition 2.24.0 Jan 19 12:06:33.426137 ignition[900]: Stage: disks Jan 19 12:06:33.432851 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 19 12:06:33.512783 kernel: audit: type=1130 audit(1768824393.459:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.427283 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:33.461188 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 19 12:06:33.427294 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:33.531401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 19 12:06:33.428803 ignition[900]: disks: disks passed Jan 19 12:06:33.561385 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 12:06:33.428865 ignition[900]: Ignition finished successfully Jan 19 12:06:33.576278 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 12:06:33.609396 systemd[1]: Reached target basic.target - Basic System. Jan 19 12:06:33.644411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 19 12:06:33.833088 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 19 12:06:33.861658 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 19 12:06:33.936316 kernel: audit: type=1130 audit(1768824393.861:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:33.866235 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 19 12:06:34.454988 kernel: EXT4-fs (vda9): mounted filesystem 94229029-29b7-42b8-a135-4530ccb5ed34 r/w with ordered data mode. Quota mode: none. Jan 19 12:06:34.456356 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 19 12:06:34.470712 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 19 12:06:34.501080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 19 12:06:34.550353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 19 12:06:34.592204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Jan 19 12:06:34.563182 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 19 12:06:34.563230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 19 12:06:34.563263 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 12:06:34.611368 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 19 12:06:34.638310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 19 12:06:34.763305 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 12:06:34.763341 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 12:06:34.810946 kernel: BTRFS info (device vda6): turning on async discard Jan 19 12:06:34.811005 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 12:06:34.814709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 19 12:06:35.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.307896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 19 12:06:35.386770 kernel: audit: type=1130 audit(1768824395.325:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.328954 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 19 12:06:35.405412 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 19 12:06:35.487218 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 19 12:06:35.522323 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 12:06:35.534154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 19 12:06:35.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.584998 kernel: audit: type=1130 audit(1768824395.547:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.721866 ignition[1018]: INFO : Ignition 2.24.0 Jan 19 12:06:35.735007 ignition[1018]: INFO : Stage: mount Jan 19 12:06:35.735007 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:35.735007 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:35.735007 ignition[1018]: INFO : mount: mount passed Jan 19 12:06:35.735007 ignition[1018]: INFO : Ignition finished successfully Jan 19 12:06:35.808327 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 19 12:06:35.874994 kernel: audit: type=1130 audit(1768824395.808:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:35.812277 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 19 12:06:35.949867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 19 12:06:36.020179 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 19 12:06:36.049946 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 12:06:36.050024 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 12:06:36.104232 kernel: BTRFS info (device vda6): turning on async discard Jan 19 12:06:36.104302 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 12:06:36.109964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 19 12:06:36.296971 ignition[1046]: INFO : Ignition 2.24.0 Jan 19 12:06:36.312009 ignition[1046]: INFO : Stage: files Jan 19 12:06:36.312009 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:36.312009 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:36.312009 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Jan 19 12:06:36.312009 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 19 12:06:36.312009 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 19 12:06:36.442312 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 19 12:06:36.442312 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 19 12:06:36.442312 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 19 12:06:36.442312 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 19 12:06:36.442312 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 19 12:06:36.323076 unknown[1046]: wrote ssh authorized keys file for user: core Jan 19 12:06:36.750914 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 19 12:06:36.865320 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 19 12:06:36.865320 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 19 12:06:36.926294 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 19 12:06:37.606797 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 19 12:06:41.711929 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 19 12:06:41.711929 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 19 12:06:41.765979 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 12:06:41.833109 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 12:06:41.833109 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 19 12:06:41.833109 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 19 12:06:41.833109 ignition[1046]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 19 12:06:41.862173 ignition[1046]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 19 12:06:41.862173 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 19 12:06:41.862173 ignition[1046]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 19 12:06:41.993096 ignition[1046]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 19 12:06:42.034060 ignition[1046]: INFO : files: files passed Jan 19 12:06:42.034060 ignition[1046]: INFO : Ignition finished successfully Jan 19 12:06:42.203116 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 19 12:06:42.272187 kernel: audit: type=1130 audit(1768824402.219:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:42.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.225104 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 19 12:06:42.329148 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 19 12:06:42.352292 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 19 12:06:42.465098 kernel: audit: type=1130 audit(1768824402.363:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.465130 kernel: audit: type=1131 audit(1768824402.363:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.352996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 19 12:06:42.494272 initrd-setup-root-after-ignition[1077]: grep: /sysroot/oem/oem-release: No such file or directory Jan 19 12:06:42.533238 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 12:06:42.533238 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 19 12:06:42.583310 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 12:06:42.617873 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 12:06:42.700046 kernel: audit: type=1130 audit(1768824402.638:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.640085 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 19 12:06:42.721150 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 19 12:06:42.925031 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 19 12:06:42.925382 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 19 12:06:43.036883 kernel: audit: type=1130 audit(1768824402.946:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:43.036916 kernel: audit: type=1131 audit(1768824402.948:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:42.949012 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 19 12:06:43.043188 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 19 12:06:43.107385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 19 12:06:43.132029 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 19 12:06:43.253139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 12:06:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.297139 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 19 12:06:43.365150 kernel: audit: type=1130 audit(1768824403.291:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.404379 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 12:06:43.405092 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 19 12:06:43.441994 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 12:06:43.478923 systemd[1]: Stopped target timers.target - Timer Units. Jan 19 12:06:43.480005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 19 12:06:43.624338 kernel: audit: type=1131 audit(1768824403.535:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.480308 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 12:06:43.537042 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 19 12:06:43.590394 systemd[1]: Stopped target basic.target - Basic System. Jan 19 12:06:43.634230 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 19 12:06:43.664297 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 12:06:43.699200 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 19 12:06:43.731997 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 19 12:06:43.757135 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 19 12:06:43.863241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 12:06:43.864179 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 19 12:06:43.938198 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 19 12:06:43.951958 systemd[1]: Stopped target swap.target - Swaps. Jan 19 12:06:43.984376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 19 12:06:44.066116 kernel: audit: type=1131 audit(1768824403.998:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:43.986113 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 19 12:06:44.067057 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 19 12:06:44.082248 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 12:06:44.153067 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 19 12:06:44.155055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 12:06:44.210105 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 19 12:06:44.304029 kernel: audit: type=1131 audit(1768824404.227:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.210265 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 19 12:06:44.228934 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 19 12:06:44.229194 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 19 12:06:44.289039 systemd[1]: Stopped target paths.target - Path Units. Jan 19 12:06:44.319296 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 19 12:06:44.320378 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 12:06:44.336350 systemd[1]: Stopped target slices.target - Slice Units. Jan 19 12:06:44.375013 systemd[1]: Stopped target sockets.target - Socket Units. Jan 19 12:06:44.414940 systemd[1]: iscsid.socket: Deactivated successfully. Jan 19 12:06:44.415076 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 12:06:44.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.434243 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 19 12:06:44.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.434909 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 12:06:44.462367 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 19 12:06:44.463040 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 19 12:06:44.498307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 19 12:06:44.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.498965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 12:06:44.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.733060 ignition[1104]: INFO : Ignition 2.24.0 Jan 19 12:06:44.733060 ignition[1104]: INFO : Stage: umount Jan 19 12:06:44.733060 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 12:06:44.733060 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:06:44.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.568308 systemd[1]: ignition-files.service: Deactivated successfully. Jan 19 12:06:44.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.854002 ignition[1104]: INFO : umount: umount passed Jan 19 12:06:44.854002 ignition[1104]: INFO : Ignition finished successfully Jan 19 12:06:44.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.568971 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 19 12:06:44.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.590367 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 19 12:06:44.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.610857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 19 12:06:44.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.660363 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 19 12:06:44.661331 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 12:06:44.684098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 19 12:06:44.684263 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 12:06:44.717239 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 19 12:06:44.718001 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 12:06:44.780265 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 19 12:06:44.780900 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 19 12:06:44.821098 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 19 12:06:44.821332 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 19 12:06:44.851963 systemd[1]: Stopped target network.target - Network. Jan 19 12:06:45.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:44.879208 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 19 12:06:44.879296 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 19 12:06:44.905082 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 19 12:06:44.905165 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 19 12:06:44.931836 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 19 12:06:44.931910 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 19 12:06:44.945046 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 19 12:06:44.945112 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 19 12:06:44.973981 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 19 12:06:45.008255 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 19 12:06:45.073058 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 19 12:06:45.168231 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 19 12:06:45.168915 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 19 12:06:45.416000 audit: BPF prog-id=6 op=UNLOAD Jan 19 12:06:45.424371 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 19 12:06:45.425211 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 19 12:06:45.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.487971 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 19 12:06:45.488336 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 19 12:06:45.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:45.506000 audit: BPF prog-id=9 op=UNLOAD Jan 19 12:06:45.507197 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 19 12:06:45.538204 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 19 12:06:45.538265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 19 12:06:45.615095 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 19 12:06:45.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.615208 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 19 12:06:45.658036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 19 12:06:45.673349 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 19 12:06:45.673949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 12:06:45.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.750854 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 19 12:06:45.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.750938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 19 12:06:45.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.786203 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 19 12:06:45.786268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 19 12:06:45.825100 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 12:06:45.952024 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 19 12:06:45.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.952216 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 12:06:45.976385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 19 12:06:46.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.976929 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 19 12:06:45.997028 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 19 12:06:45.997087 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 12:06:45.997358 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 19 12:06:46.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:46.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:45.998014 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 19 12:06:46.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.094101 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 19 12:06:46.094188 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 19 12:06:46.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.130027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 19 12:06:46.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.130098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 12:06:46.165402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 19 12:06:46.178918 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 19 12:06:46.179002 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 12:06:46.207979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 19 12:06:46.208044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 12:06:46.278000 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 19 12:06:46.278095 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 12:06:46.301097 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 19 12:06:46.301180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 12:06:46.324265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 12:06:46.324337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:06:46.650015 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 19 12:06:46.670853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 19 12:06:46.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:46.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.718207 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 19 12:06:46.719070 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 19 12:06:46.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:46.746243 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 19 12:06:46.780299 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 19 12:06:46.866069 systemd[1]: Switching root. Jan 19 12:06:46.938240 systemd-journald[319]: Journal stopped Jan 19 12:06:52.118250 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). Jan 19 12:06:52.118326 kernel: SELinux: policy capability network_peer_controls=1 Jan 19 12:06:52.118354 kernel: SELinux: policy capability open_perms=1 Jan 19 12:06:52.118365 kernel: SELinux: policy capability extended_socket_class=1 Jan 19 12:06:52.118381 kernel: SELinux: policy capability always_check_network=0 Jan 19 12:06:52.118394 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 19 12:06:52.118407 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 19 12:06:52.118940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 19 12:06:52.118956 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 19 12:06:52.118967 kernel: SELinux: policy capability userspace_initial_context=0 Jan 19 12:06:52.118979 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 19 12:06:52.119001 kernel: audit: type=1403 audit(1768824407.333:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 19 12:06:52.119025 systemd[1]: Successfully loaded SELinux policy in 189.907ms. Jan 19 12:06:52.119039 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.143ms. Jan 19 12:06:52.119052 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 12:06:52.119064 systemd[1]: Detected virtualization kvm. Jan 19 12:06:52.119076 systemd[1]: Detected architecture x86-64. Jan 19 12:06:52.119087 systemd[1]: Detected first boot. Jan 19 12:06:52.119099 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 12:06:52.119112 kernel: audit: type=1334 audit(1768824407.665:86): prog-id=10 op=LOAD Jan 19 12:06:52.119124 kernel: audit: type=1334 audit(1768824407.665:87): prog-id=10 op=UNLOAD Jan 19 12:06:52.119135 kernel: audit: type=1334 audit(1768824407.665:88): prog-id=11 op=LOAD Jan 19 12:06:52.119145 kernel: audit: type=1334 audit(1768824407.665:89): prog-id=11 op=UNLOAD Jan 19 12:06:52.119156 zram_generator::config[1149]: No configuration found. 
Jan 19 12:06:52.119170 kernel: Guest personality initialized and is inactive Jan 19 12:06:52.119181 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 19 12:06:52.119194 kernel: Initialized host personality Jan 19 12:06:52.119206 kernel: NET: Registered PF_VSOCK protocol family Jan 19 12:06:52.119217 systemd[1]: Populated /etc with preset unit settings. Jan 19 12:06:52.119228 kernel: audit: type=1334 audit(1768824409.407:90): prog-id=12 op=LOAD Jan 19 12:06:52.119239 kernel: audit: type=1334 audit(1768824409.407:91): prog-id=3 op=UNLOAD Jan 19 12:06:52.119249 kernel: audit: type=1334 audit(1768824409.407:92): prog-id=13 op=LOAD Jan 19 12:06:52.119260 kernel: audit: type=1334 audit(1768824409.407:93): prog-id=14 op=LOAD Jan 19 12:06:52.119273 kernel: audit: type=1334 audit(1768824409.407:94): prog-id=4 op=UNLOAD Jan 19 12:06:52.119284 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 19 12:06:52.119296 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 19 12:06:52.119307 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 19 12:06:52.119322 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 19 12:06:52.119341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 19 12:06:52.119354 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 19 12:06:52.119366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 19 12:06:52.119378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 19 12:06:52.119391 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 19 12:06:52.119404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 19 12:06:52.119934 systemd[1]: Created slice user.slice - User and Session Slice. Jan 19 12:06:52.119957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 12:06:52.119975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 12:06:52.119996 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 19 12:06:52.120014 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 19 12:06:52.120030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 19 12:06:52.120042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 12:06:52.120054 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 19 12:06:52.120065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 12:06:52.120077 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 12:06:52.120089 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 19 12:06:52.120100 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 19 12:06:52.120114 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 19 12:06:52.120126 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 19 12:06:52.120138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 19 12:06:52.120150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 12:06:52.120162 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 19 12:06:52.120173 systemd[1]: Reached target slices.target - Slice Units. Jan 19 12:06:52.120185 systemd[1]: Reached target swap.target - Swaps. Jan 19 12:06:52.120196 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 19 12:06:52.120210 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 19 12:06:52.120222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 19 12:06:52.120233 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 12:06:52.120245 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 19 12:06:52.120256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 12:06:52.120271 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 19 12:06:52.120282 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 19 12:06:52.120296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 12:06:52.120308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 12:06:52.120319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 19 12:06:52.120330 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 19 12:06:52.120342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 19 12:06:52.120353 systemd[1]: Mounting media.mount - External Media Directory... Jan 19 12:06:52.120364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:06:52.120378 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 19 12:06:52.120389 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 19 12:06:52.120401 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 19 12:06:52.120413 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 19 12:06:52.120976 systemd[1]: Reached target machines.target - Containers. Jan 19 12:06:52.120994 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 19 12:06:52.121020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 12:06:52.121040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 12:06:52.121056 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 19 12:06:52.121072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 12:06:52.121088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 12:06:52.121106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 12:06:52.121128 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 19 12:06:52.121143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 19 12:06:52.121155 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 19 12:06:52.121167 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 19 12:06:52.121178 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 19 12:06:52.121189 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 19 12:06:52.121200 systemd[1]: Stopped systemd-fsck-usr.service. Jan 19 12:06:52.121213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 12:06:52.121226 kernel: ACPI: bus type drm_connector registered Jan 19 12:06:52.121238 kernel: fuse: init (API version 7.41) Jan 19 12:06:52.121249 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 12:06:52.121261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 12:06:52.121277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 12:06:52.121289 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 19 12:06:52.121322 systemd-journald[1235]: Collecting audit messages is enabled. Jan 19 12:06:52.121345 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 19 12:06:52.121357 systemd-journald[1235]: Journal started Jan 19 12:06:52.121380 systemd-journald[1235]: Runtime Journal (/run/log/journal/06c5738ad16d42cc946ca2e3a2030fb9) is 6M, max 48M, 42M free. Jan 19 12:06:50.633000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 19 12:06:51.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:51.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:51.812000 audit: BPF prog-id=14 op=UNLOAD Jan 19 12:06:51.813000 audit: BPF prog-id=13 op=UNLOAD Jan 19 12:06:51.852000 audit: BPF prog-id=15 op=LOAD Jan 19 12:06:51.872000 audit: BPF prog-id=16 op=LOAD Jan 19 12:06:51.878000 audit: BPF prog-id=17 op=LOAD Jan 19 12:06:52.085000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 19 12:06:52.085000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe75cda5c0 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:06:52.085000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 19 12:06:49.379046 systemd[1]: Queued start job for default target multi-user.target. Jan 19 12:06:49.441235 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jan 19 12:06:49.452209 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 19 12:06:49.453978 systemd[1]: systemd-journald.service: Consumed 4.871s CPU time. Jan 19 12:06:52.181204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 12:06:52.181256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:06:52.250941 systemd[1]: Started systemd-journald.service - Journal Service. Jan 19 12:06:52.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.254258 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 19 12:06:52.274057 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 19 12:06:52.293973 systemd[1]: Mounted media.mount - External Media Directory. Jan 19 12:06:52.312352 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 19 12:06:52.332928 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 19 12:06:52.353227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 19 12:06:52.372040 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 19 12:06:52.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.393134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 12:06:52.404158 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 19 12:06:52.404195 kernel: audit: type=1130 audit(1768824412.390:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.474917 kernel: audit: type=1130 audit(1768824412.473:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.474922 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 19 12:06:52.475270 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 19 12:06:52.545361 kernel: audit: type=1130 audit(1768824412.544:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.546251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 12:06:52.547353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
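The kernel audit records above carry their own Unix-epoch timestamps, e.g. audit(1768824412.390:110), next to the wall-clock prefix the journal adds. As a quick cross-check, a minimal Python sketch (illustrative only, not part of the boot tooling) converts that epoch back to UTC and shows it lines up with the Jan 19 12:06:52.390 prefix on the same record:

    from datetime import datetime, timezone

    # "audit(<epoch>.<millis>:<serial>)" as printed by the kernel audit subsystem.
    stamp = "audit(1768824412.390:110)"
    epoch, serial = stamp.strip("audit()").split(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(ts, "serial", serial)  # 2026-01-19 12:06:52.390000+00:00 serial 110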
Jan 19 12:06:52.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.588080 kernel: audit: type=1131 audit(1768824412.544:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.647321 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 19 12:06:52.648067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 12:06:52.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.686933 kernel: audit: type=1130 audit(1768824412.645:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.686958 kernel: audit: type=1131 audit(1768824412.645:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.744906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 12:06:52.745364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 12:06:52.786938 kernel: audit: type=1130 audit(1768824412.742:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.834123 kernel: audit: type=1131 audit(1768824412.742:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.854086 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 19 12:06:52.854908 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 19 12:06:52.891210 kernel: audit: type=1130 audit(1768824412.852:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 19 12:06:52.891267 kernel: audit: type=1131 audit(1768824412.852:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.948137 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 12:06:52.949924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 12:06:52.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.972002 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 12:06:52.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:52.994413 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 12:06:53.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.025999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 19 12:06:53.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.051093 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 19 12:06:53.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.083374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 12:06:53.136405 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 19 12:06:53.160036 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 19 12:06:53.188361 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 19 12:06:53.222927 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 19 12:06:53.245003 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 19 12:06:53.245208 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 12:06:53.270113 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 19 12:06:53.295378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 12:06:53.296132 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 12:06:53.305105 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 19 12:06:53.341381 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 19 12:06:53.364002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 12:06:53.367188 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 19 12:06:53.388236 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 12:06:53.394199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 12:06:53.411406 systemd-journald[1235]: Time spent on flushing to /var/log/journal/06c5738ad16d42cc946ca2e3a2030fb9 is 194.627ms for 1213 entries. Jan 19 12:06:53.411406 systemd-journald[1235]: System Journal (/var/log/journal/06c5738ad16d42cc946ca2e3a2030fb9) is 8M, max 163.5M, 155.5M free. Jan 19 12:06:53.670348 systemd-journald[1235]: Received client request to flush runtime journal. Jan 19 12:06:53.672358 kernel: loop1: detected capacity change from 0 to 50784 Jan 19 12:06:53.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.434122 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 19 12:06:53.459332 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 19 12:06:53.492288 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 19 12:06:53.512990 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 19 12:06:53.551409 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 19 12:06:53.580073 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 19 12:06:53.605401 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jan 19 12:06:53.641362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 12:06:53.680042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 19 12:06:53.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.727887 kernel: loop2: detected capacity change from 0 to 111560 Jan 19 12:06:53.739056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 19 12:06:53.746054 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 19 12:06:53.746066 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 19 12:06:53.747016 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 19 12:06:53.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.779358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 12:06:53.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.810117 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 19 12:06:53.875224 kernel: loop3: detected capacity change from 0 to 219144 Jan 19 12:06:53.961958 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 19 12:06:53.980011 kernel: loop4: detected capacity change from 0 to 50784 Jan 19 12:06:53.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:53.997000 audit: BPF prog-id=18 op=LOAD Jan 19 12:06:53.998000 audit: BPF prog-id=19 op=LOAD Jan 19 12:06:53.998000 audit: BPF prog-id=20 op=LOAD Jan 19 12:06:54.004939 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 19 12:06:54.030000 audit: BPF prog-id=21 op=LOAD Jan 19 12:06:54.037959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 19 12:06:54.048080 kernel: loop5: detected capacity change from 0 to 111560 Jan 19 12:06:54.076061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 12:06:54.116306 kernel: loop6: detected capacity change from 0 to 219144 Jan 19 12:06:54.121000 audit: BPF prog-id=22 op=LOAD Jan 19 12:06:54.121000 audit: BPF prog-id=23 op=LOAD Jan 19 12:06:54.121000 audit: BPF prog-id=24 op=LOAD Jan 19 12:06:54.126019 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 19 12:06:54.150000 audit: BPF prog-id=25 op=LOAD Jan 19 12:06:54.150000 audit: BPF prog-id=26 op=LOAD Jan 19 12:06:54.150000 audit: BPF prog-id=27 op=LOAD Jan 19 12:06:54.163925 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 19 12:06:54.166351 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 19 12:06:54.177285 (sd-merge)[1292]: Merged extensions into '/usr'. 
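systemd-journald reports above that flushing 1213 entries to /var/log/journal took 194.627 ms. Re-deriving the per-entry cost from those two figures (a trivial illustrative calculation, not produced by the tooling itself):

    # Figures copied from the systemd-journald flush report in the log.
    flush_ms = 194.627
    entries = 1213
    print(f"{flush_ms / entries:.3f} ms per flushed entry")  # 0.160 ms per flushed entry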
Jan 19 12:06:54.213035 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 19 12:06:54.213320 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)... Jan 19 12:06:54.213334 systemd[1]: Reloading... Jan 19 12:06:54.213413 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 19 12:06:54.336260 systemd-nsresourced[1296]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 19 12:06:54.490199 zram_generator::config[1337]: No configuration found. Jan 19 12:06:54.664973 systemd-oomd[1293]: No swap; memory pressure usage will be degraded Jan 19 12:06:54.678113 systemd-resolved[1294]: Positive Trust Anchors: Jan 19 12:06:54.678282 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 12:06:54.678290 systemd-resolved[1294]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 12:06:54.678330 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 12:06:54.699133 systemd-resolved[1294]: Defaulting to hostname 'linux'. Jan 19 12:06:55.031194 systemd[1]: Reloading finished in 816 ms. Jan 19 12:06:55.128162 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 19 12:06:55.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.156038 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 19 12:06:55.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.187339 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 19 12:06:55.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.218364 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 19 12:06:55.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.242185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 19 12:06:55.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:55.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.270122 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 19 12:06:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:55.299201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 12:06:55.354030 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 12:06:55.401327 systemd[1]: Starting ensure-sysext.service... Jan 19 12:06:55.443000 audit: BPF prog-id=8 op=UNLOAD Jan 19 12:06:55.443000 audit: BPF prog-id=7 op=UNLOAD Jan 19 12:06:55.420375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 12:06:55.444000 audit: BPF prog-id=28 op=LOAD Jan 19 12:06:55.444000 audit: BPF prog-id=29 op=LOAD Jan 19 12:06:55.448861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 12:06:55.472000 audit: BPF prog-id=30 op=LOAD Jan 19 12:06:55.472000 audit: BPF prog-id=21 op=UNLOAD Jan 19 12:06:55.476000 audit: BPF prog-id=31 op=LOAD Jan 19 12:06:55.480000 audit: BPF prog-id=18 op=UNLOAD Jan 19 12:06:55.481000 audit: BPF prog-id=32 op=LOAD Jan 19 12:06:55.481000 audit: BPF prog-id=33 op=LOAD Jan 19 12:06:55.482000 audit: BPF prog-id=19 op=UNLOAD Jan 19 12:06:55.483000 audit: BPF prog-id=20 op=UNLOAD Jan 19 12:06:55.485000 audit: BPF prog-id=34 op=LOAD Jan 19 12:06:55.485000 audit: BPF prog-id=25 op=UNLOAD Jan 19 12:06:55.487000 audit: BPF prog-id=35 op=LOAD Jan 19 12:06:55.487000 audit: BPF prog-id=36 op=LOAD Jan 19 12:06:55.487000 audit: BPF prog-id=26 op=UNLOAD Jan 19 12:06:55.487000 audit: BPF prog-id=27 op=UNLOAD Jan 19 12:06:55.490000 audit: BPF prog-id=37 op=LOAD Jan 19 12:06:55.490000 audit: BPF prog-id=15 op=UNLOAD Jan 19 12:06:55.490000 audit: BPF prog-id=38 op=LOAD Jan 19 12:06:55.490000 audit: BPF prog-id=39 op=LOAD Jan 19 12:06:55.490000 audit: BPF prog-id=16 op=UNLOAD Jan 19 12:06:55.490000 audit: BPF prog-id=17 op=UNLOAD Jan 19 12:06:55.493000 audit: BPF prog-id=40 op=LOAD Jan 19 12:06:55.494000 audit: BPF prog-id=22 op=UNLOAD Jan 19 12:06:55.494000 audit: BPF prog-id=41 op=LOAD Jan 19 12:06:55.494000 audit: BPF prog-id=42 op=LOAD Jan 19 12:06:55.494000 audit: BPF prog-id=23 op=UNLOAD Jan 19 12:06:55.494000 audit: BPF prog-id=24 op=UNLOAD Jan 19 12:06:55.515249 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 19 12:06:55.516012 systemd[1]: Reloading... Jan 19 12:06:55.522217 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 19 12:06:55.522275 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 19 12:06:55.523393 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 19 12:06:55.528059 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 19 12:06:55.528162 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. 
Jan 19 12:06:55.555167 systemd-udevd[1381]: Using default interface naming scheme 'v257'. Jan 19 12:06:55.558127 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 12:06:55.558141 systemd-tmpfiles[1380]: Skipping /boot Jan 19 12:06:55.607091 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 12:06:55.607247 systemd-tmpfiles[1380]: Skipping /boot Jan 19 12:06:55.765296 zram_generator::config[1411]: No configuration found. Jan 19 12:06:55.994240 kernel: mousedev: PS/2 mouse device common for all mice Jan 19 12:06:56.060999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 19 12:06:56.081017 kernel: ACPI: button: Power Button [PWRF] Jan 19 12:06:56.144954 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 19 12:06:56.159114 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 19 12:06:56.205181 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 19 12:06:56.445317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 19 12:06:56.470401 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 19 12:06:56.473086 systemd[1]: Reloading finished in 956 ms. Jan 19 12:06:56.521115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 12:06:56.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:56.553000 audit: BPF prog-id=43 op=LOAD Jan 19 12:06:56.553000 audit: BPF prog-id=40 op=UNLOAD Jan 19 12:06:56.553000 audit: BPF prog-id=44 op=LOAD Jan 19 12:06:56.553000 audit: BPF prog-id=45 op=LOAD Jan 19 12:06:56.554000 audit: BPF prog-id=41 op=UNLOAD Jan 19 12:06:56.554000 audit: BPF prog-id=42 op=UNLOAD Jan 19 12:06:56.555000 audit: BPF prog-id=46 op=LOAD Jan 19 12:06:56.556000 audit: BPF prog-id=47 op=LOAD Jan 19 12:06:56.556000 audit: BPF prog-id=28 op=UNLOAD Jan 19 12:06:56.556000 audit: BPF prog-id=29 op=UNLOAD Jan 19 12:06:56.559000 audit: BPF prog-id=48 op=LOAD Jan 19 12:06:56.559000 audit: BPF prog-id=30 op=UNLOAD Jan 19 12:06:56.562000 audit: BPF prog-id=49 op=LOAD Jan 19 12:06:56.563000 audit: BPF prog-id=31 op=UNLOAD Jan 19 12:06:56.563000 audit: BPF prog-id=50 op=LOAD Jan 19 12:06:56.563000 audit: BPF prog-id=51 op=LOAD Jan 19 12:06:56.563000 audit: BPF prog-id=32 op=UNLOAD Jan 19 12:06:56.563000 audit: BPF prog-id=33 op=UNLOAD Jan 19 12:06:56.567000 audit: BPF prog-id=52 op=LOAD Jan 19 12:06:56.567000 audit: BPF prog-id=34 op=UNLOAD Jan 19 12:06:56.567000 audit: BPF prog-id=53 op=LOAD Jan 19 12:06:56.568000 audit: BPF prog-id=54 op=LOAD Jan 19 12:06:56.568000 audit: BPF prog-id=35 op=UNLOAD Jan 19 12:06:56.568000 audit: BPF prog-id=36 op=UNLOAD Jan 19 12:06:56.570000 audit: BPF prog-id=55 op=LOAD Jan 19 12:06:56.571000 audit: BPF prog-id=37 op=UNLOAD Jan 19 12:06:56.571000 audit: BPF prog-id=56 op=LOAD Jan 19 12:06:56.571000 audit: BPF prog-id=57 op=LOAD Jan 19 12:06:56.571000 audit: BPF prog-id=38 op=UNLOAD Jan 19 12:06:56.571000 audit: BPF prog-id=39 op=UNLOAD Jan 19 12:06:56.792904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 19 12:06:56.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:57.128907 systemd[1]: Finished ensure-sysext.service. Jan 19 12:06:57.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:57.268388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:06:57.281360 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 19 12:06:57.314245 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 19 12:06:57.339384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 12:06:57.482389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 12:06:57.529399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 12:06:57.571399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 12:06:57.635206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 19 12:06:57.678119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 12:06:57.680200 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 12:06:57.705302 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 19 12:06:57.785166 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 19 12:06:57.810364 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 12:06:57.826075 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 19 12:06:57.886907 kernel: kauditd_printk_skb: 95 callbacks suppressed Jan 19 12:06:57.887036 kernel: audit: type=1334 audit(1768824417.854:215): prog-id=58 op=LOAD Jan 19 12:06:57.854000 audit: BPF prog-id=58 op=LOAD Jan 19 12:06:57.891978 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 12:06:57.949000 audit: BPF prog-id=59 op=LOAD Jan 19 12:06:57.970963 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 19 12:06:57.998300 kernel: audit: type=1334 audit(1768824417.949:216): prog-id=59 op=LOAD Jan 19 12:06:58.022915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 19 12:06:58.086983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 12:06:58.114328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:06:58.126232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 12:06:58.130094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 19 12:06:58.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.166327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 19 12:06:58.170304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 12:06:58.216161 kernel: audit: type=1130 audit(1768824418.162:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.268849 kernel: audit: type=1131 audit(1768824418.163:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.339289 kernel: audit: type=1127 audit(1768824418.166:219): pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.166000 audit[1524]: SYSTEM_BOOT pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.358320 augenrules[1530]: No rules Jan 19 12:06:58.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.405405 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 19 12:06:58.437185 kernel: audit: type=1130 audit(1768824418.340:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:06:58.437362 kernel: audit: type=1131 audit(1768824418.340:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:06:58.356000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 19 12:06:58.558006 kernel: audit: type=1305 audit(1768824418.356:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 19 12:06:58.558102 kernel: audit: type=1300 audit(1768824418.356:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5e328510 a2=420 a3=0 items=0 ppid=1497 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:06:58.356000 audit[1530]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5e328510 a2=420 a3=0 items=0 ppid=1497 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:06:58.356000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:06:58.598989 kernel: audit: type=1327 audit(1768824418.356:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:06:58.636283 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 12:06:58.652191 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 12:06:58.679343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 12:06:58.682128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 12:06:58.684152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 12:06:58.685179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 12:06:58.713930 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 19 12:06:58.715959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 19 12:06:58.896989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 19 12:06:58.965316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 12:06:58.967094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 12:06:58.967154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 19 12:06:59.094091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:06:59.648064 kernel: kvm_amd: TSC scaling supported Jan 19 12:06:59.648143 kernel: kvm_amd: Nested Virtualization enabled Jan 19 12:06:59.677023 kernel: kvm_amd: Nested Paging enabled Jan 19 12:06:59.677109 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 19 12:06:59.674380 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
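The audit PROCTITLE record above stores the auditctl command line as a hex string with NUL-separated arguments. A short illustrative Python snippet (a reading aid only, not part of the system) decodes it back to the invoked command:

    # Hex proctitle value copied from the PROCTITLE audit record above.
    hexstr = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print([a.decode() for a in argv])  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']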
Jan 19 12:06:59.687223 kernel: kvm_amd: PMU virtualization is disabled Jan 19 12:06:59.694305 systemd-networkd[1519]: lo: Link UP Jan 19 12:06:59.694316 systemd-networkd[1519]: lo: Gained carrier Jan 19 12:06:59.702107 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:06:59.702116 systemd-networkd[1519]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 12:06:59.722012 systemd-networkd[1519]: eth0: Link UP Jan 19 12:06:59.730106 systemd-networkd[1519]: eth0: Gained carrier Jan 19 12:06:59.730132 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:06:59.773362 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 12:06:59.812214 systemd[1]: Reached target network.target - Network. Jan 19 12:06:59.842069 systemd[1]: Reached target time-set.target - System Time Set. Jan 19 12:06:59.895076 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 19 12:06:59.936220 systemd-networkd[1519]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 19 12:06:59.940255 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Jan 19 12:07:01.142681 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 19 12:07:01.142729 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2026-01-19 12:07:01.142598 UTC. Jan 19 12:07:01.142769 systemd-resolved[1294]: Clock change detected. Flushing caches. Jan 19 12:07:01.146043 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 19 12:07:01.370999 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 19 12:07:01.935388 systemd-networkd[1519]: eth0: Gained IPv6LL Jan 19 12:07:01.958073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 19 12:07:02.013654 systemd[1]: Reached target network-online.target - Network is Online. Jan 19 12:07:02.438912 kernel: EDAC MC: Ver: 3.0.0 Jan 19 12:07:02.948893 ldconfig[1512]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 19 12:07:02.972866 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 19 12:07:03.006782 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 19 12:07:03.120714 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 19 12:07:03.146043 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 12:07:03.169822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 19 12:07:03.194780 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 19 12:07:03.219943 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 19 12:07:03.245018 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 19 12:07:03.272033 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 19 12:07:03.297901 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
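The journal timestamps jump from 12:06:59.940 to 12:07:01.142 at this point because systemd-timesyncd steps the clock on its first synchronization, which is also what prompts the systemd-resolved cache flush noted above. A rough sketch of the apparent step, taken straight from the two log timestamps (so only an approximation of the true clock correction):

    from datetime import datetime

    # Last pre-sync journal timestamp vs. the time timesyncd synchronized to.
    before = datetime.fromisoformat("2026-01-19 12:06:59.940255")
    after = datetime.fromisoformat("2026-01-19 12:07:01.142598")
    print(after - before)  # 0:00:01.202343, roughly a 1.2 s forward jump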
Jan 19 12:07:03.325004 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 19 12:07:03.351924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 19 12:07:03.376643 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 19 12:07:03.376836 systemd[1]: Reached target paths.target - Path Units. Jan 19 12:07:03.394889 systemd[1]: Reached target timers.target - Timer Units. Jan 19 12:07:03.422054 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 19 12:07:03.457063 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 19 12:07:03.486938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 19 12:07:03.516805 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 19 12:07:03.547850 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 19 12:07:03.580872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 19 12:07:03.600793 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 19 12:07:03.625819 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 19 12:07:03.650744 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 12:07:03.666786 systemd[1]: Reached target basic.target - Basic System. Jan 19 12:07:03.682844 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 19 12:07:03.682869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 19 12:07:03.686929 systemd[1]: Starting containerd.service - containerd container runtime... Jan 19 12:07:03.732655 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 19 12:07:03.757843 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 19 12:07:03.798973 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 19 12:07:03.825673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 19 12:07:03.860957 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 19 12:07:03.882715 jq[1568]: false Jan 19 12:07:03.881711 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 19 12:07:03.885014 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 19 12:07:03.908890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:07:03.936721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 19 12:07:03.963748 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 19 12:07:03.976647 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jan 19 12:07:03.985778 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jan 19 12:07:03.998681 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 19 12:07:04.000975 oslogin_cache_refresh[1570]: Failure getting users, quitting Jan 19 12:07:04.011723 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Jan 19 12:07:04.011723 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 12:07:04.011723 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Jan 19 12:07:04.000998 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 12:07:04.001065 oslogin_cache_refresh[1570]: Refreshing group entry cache Jan 19 12:07:04.015676 extend-filesystems[1569]: Found /dev/vda6 Jan 19 12:07:04.035801 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Jan 19 12:07:04.035085 oslogin_cache_refresh[1570]: Failure getting groups, quitting Jan 19 12:07:04.038597 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 12:07:04.037978 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 12:07:04.045007 extend-filesystems[1569]: Found /dev/vda9 Jan 19 12:07:04.058743 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 19 12:07:04.063703 extend-filesystems[1569]: Checking size of /dev/vda9 Jan 19 12:07:04.102874 extend-filesystems[1569]: Resized partition /dev/vda9 Jan 19 12:07:04.119862 extend-filesystems[1594]: resize2fs 1.47.3 (8-Jul-2025) Jan 19 12:07:04.175572 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 19 12:07:04.125668 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 19 12:07:04.187862 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 19 12:07:04.206661 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 19 12:07:04.209692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 19 12:07:04.213766 systemd[1]: Starting update-engine.service - Update Engine... Jan 19 12:07:04.238705 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 19 12:07:04.284844 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 19 12:07:04.308680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 19 12:07:04.309086 jq[1600]: true Jan 19 12:07:04.309670 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 19 12:07:04.310087 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 19 12:07:04.313680 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 19 12:07:04.341635 systemd[1]: motdgen.service: Deactivated successfully. Jan 19 12:07:04.341922 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 19 12:07:04.393749 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 19 12:07:04.409981 update_engine[1599]: I20260119 12:07:04.407781 1599 main.cc:92] Flatcar Update Engine starting Jan 19 12:07:04.433922 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 19 12:07:04.433782 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 19 12:07:04.434731 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 19 12:07:04.490083 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 19 12:07:04.490083 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 19 12:07:04.490083 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 19 12:07:04.587077 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Jan 19 12:07:04.608966 jq[1617]: true Jan 19 12:07:04.618932 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 19 12:07:04.621918 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 19 12:07:04.645901 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 19 12:07:04.646896 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 19 12:07:04.682064 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 19 12:07:04.685811 tar[1616]: linux-amd64/LICENSE Jan 19 12:07:04.685811 tar[1616]: linux-amd64/helm Jan 19 12:07:04.686944 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button) Jan 19 12:07:04.686985 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 19 12:07:04.690018 systemd-logind[1598]: New seat seat0. Jan 19 12:07:04.696030 systemd[1]: Started systemd-logind.service - User Login Management. Jan 19 12:07:04.835984 dbus-daemon[1566]: [system] SELinux support is enabled Jan 19 12:07:04.837037 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 19 12:07:04.851040 update_engine[1599]: I20260119 12:07:04.850975 1599 update_check_scheduler.cc:74] Next update check in 3m19s Jan 19 12:07:04.868660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 19 12:07:04.869078 dbus-daemon[1566]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 19 12:07:04.868706 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 19 12:07:04.892861 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 19 12:07:04.892897 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 19 12:07:04.917849 systemd[1]: Started update-engine.service - Update Engine. Jan 19 12:07:04.928751 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 19 12:07:04.953592 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 19 12:07:05.039008 bash[1652]: Updated "/home/core/.ssh/authorized_keys" Jan 19 12:07:05.045907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 19 12:07:05.078779 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
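extend-filesystems grows the root ext4 filesystem on /dev/vda9 online, from 456704 to 1784827 blocks of 4 KiB, as reported above. Converting those block counts into sizes (an illustrative calculation using only the numbers from the log):

    # Block counts from the EXT4-fs / resize2fs messages; block size is 4 KiB (4096 bytes).
    BLOCK = 4096
    for label, blocks in (("before", 456_704), ("after", 1_784_827)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 1.74 GiB
    # after: 6.81 GiB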
Jan 19 12:07:05.138661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 19 12:07:05.171656 locksmithd[1654]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 19 12:07:05.180726 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 19 12:07:05.275906 systemd[1]: issuegen.service: Deactivated successfully. Jan 19 12:07:05.276951 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 19 12:07:05.314881 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 19 12:07:05.375031 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 19 12:07:05.418877 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 19 12:07:05.468898 containerd[1618]: time="2026-01-19T12:07:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 19 12:07:05.471690 containerd[1618]: time="2026-01-19T12:07:05.470906935Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 19 12:07:05.483065 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 19 12:07:05.512005 systemd[1]: Reached target getty.target - Login Prompts. Jan 19 12:07:05.551650 containerd[1618]: time="2026-01-19T12:07:05.550982451Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.281µs" Jan 19 12:07:05.551650 containerd[1618]: time="2026-01-19T12:07:05.551025832Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 19 12:07:05.551650 containerd[1618]: time="2026-01-19T12:07:05.551069463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 19 12:07:05.551844 containerd[1618]: time="2026-01-19T12:07:05.551823100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 19 12:07:05.553663 containerd[1618]: time="2026-01-19T12:07:05.552080751Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 19 12:07:05.553749 containerd[1618]: time="2026-01-19T12:07:05.553729900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 12:07:05.554434 containerd[1618]: time="2026-01-19T12:07:05.553867166Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 12:07:05.558658 containerd[1618]: time="2026-01-19T12:07:05.557903437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.559860 containerd[1618]: time="2026-01-19T12:07:05.559042423Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.559860 containerd[1618]: time="2026-01-19T12:07:05.559717734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 19 12:07:05.559860 containerd[1618]: time="2026-01-19T12:07:05.559743122Z" level=info msg="skip loading plugin" error="devmapper not 
configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 19 12:07:05.559860 containerd[1618]: time="2026-01-19T12:07:05.559756497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.561036 containerd[1618]: time="2026-01-19T12:07:05.559981827Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.561036 containerd[1618]: time="2026-01-19T12:07:05.560777172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 19 12:07:05.561640 containerd[1618]: time="2026-01-19T12:07:05.561064379Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.562702 containerd[1618]: time="2026-01-19T12:07:05.561894488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.562702 containerd[1618]: time="2026-01-19T12:07:05.562081567Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 12:07:05.562788 containerd[1618]: time="2026-01-19T12:07:05.562706855Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 19 12:07:05.562788 containerd[1618]: time="2026-01-19T12:07:05.562763120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 19 12:07:05.568595 containerd[1618]: time="2026-01-19T12:07:05.567750973Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 19 12:07:05.568595 containerd[1618]: time="2026-01-19T12:07:05.567982836Z" level=info msg="metadata content store policy set" policy=shared Jan 19 12:07:05.620680 containerd[1618]: time="2026-01-19T12:07:05.619740054Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 19 12:07:05.620680 containerd[1618]: time="2026-01-19T12:07:05.619959023Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 12:07:05.620680 containerd[1618]: time="2026-01-19T12:07:05.620070341Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623644804Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623693685Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623711248Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623725535Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623741093Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623761862Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623775979Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623788632Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623803139Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623818829Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.623841130Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.624020255Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.624044229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 19 12:07:05.624533 containerd[1618]: time="2026-01-19T12:07:05.624062915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624076249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624812614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624840086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624855885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624868568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624882144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624895579Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624907902Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 19 12:07:05.624971 containerd[1618]: time="2026-01-19T12:07:05.624938189Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 19 12:07:05.625819 containerd[1618]: time="2026-01-19T12:07:05.624987811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 19 12:07:05.625819 containerd[1618]: time="2026-01-19T12:07:05.625002799Z" 
level=info msg="Start snapshots syncer" Jan 19 12:07:05.625819 containerd[1618]: time="2026-01-19T12:07:05.625038336Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 19 12:07:05.631531 containerd[1618]: time="2026-01-19T12:07:05.628016786Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 19 12:07:05.631531 containerd[1618]: time="2026-01-19T12:07:05.630682172Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.630756431Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.630934643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.630964018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.630978646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631003081Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631019142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631032877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 
containerd[1618]: time="2026-01-19T12:07:05.631048687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631064005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631085455Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631703929Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631724889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 12:07:05.632907 containerd[1618]: time="2026-01-19T12:07:05.631742712Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631755586Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631768520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631781094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631794148Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631810408Z" level=info msg="runtime interface created" Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631817732Z" level=info msg="created NRI interface" Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631836207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631853248Z" level=info msg="Connect containerd service" Jan 19 12:07:05.633786 containerd[1618]: time="2026-01-19T12:07:05.631886060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 19 12:07:05.682646 containerd[1618]: time="2026-01-19T12:07:05.675899526Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 19 12:07:06.252626 tar[1616]: linux-amd64/README.md Jan 19 12:07:06.428838 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
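The containerd error just above ("no network config found in /etc/cni/net.d") is the normal state of a node where no CNI plugin has been installed yet; the "cni network conf syncer" started in the next entries picks a configuration up once one appears. As a hedged illustration only, with the network name, bridge name, and subnet below being made-up placeholders rather than values from this system, a minimal bridge conflist that would satisfy the loader could be written like this:

    #!/usr/bin/env python3
    """Sketch: drop a minimal CNI conflist into /etc/cni/net.d.
    All names and the subnet are illustrative placeholders."""
    import json
    import pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-pod-network",        # placeholder network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",              # placeholder bridge name
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",  # placeholder subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))
    print(f"wrote {path}")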
Jan 19 12:07:06.492621 containerd[1618]: time="2026-01-19T12:07:06.490682215Z" level=info msg="Start subscribing containerd event" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.499717803Z" level=info msg="Start recovering state" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500005110Z" level=info msg="Start event monitor" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500022352Z" level=info msg="Start cni network conf syncer for default" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500032301Z" level=info msg="Start streaming server" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500044323Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500054452Z" level=info msg="runtime interface starting up..." Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500062477Z" level=info msg="starting plugins..." Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.500078698Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.503747620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 19 12:07:06.503888 containerd[1618]: time="2026-01-19T12:07:06.503828792Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 19 12:07:06.506991 systemd[1]: Started containerd.service - containerd container runtime. Jan 19 12:07:06.551783 containerd[1618]: time="2026-01-19T12:07:06.507745501Z" level=info msg="containerd successfully booted in 1.040720s" Jan 19 12:07:06.953686 kernel: hrtimer: interrupt took 2698338 ns Jan 19 12:07:09.761004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:07:09.788008 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 19 12:07:09.809920 systemd[1]: Startup finished in 8.090s (kernel) + 22.741s (initrd) + 21.468s (userspace) = 52.300s. Jan 19 12:07:09.812880 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:07:11.686757 kubelet[1706]: E0119 12:07:11.686392 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:07:11.694945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:07:11.695906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:07:11.696874 systemd[1]: kubelet.service: Consumed 2.076s CPU time, 259M memory peak. Jan 19 12:07:11.817996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 19 12:07:11.823930 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:58038.service - OpenSSH per-connection server daemon (10.0.0.1:58038). Jan 19 12:07:12.262041 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 58038 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:12.277055 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:12.325687 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
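The kubelet failure above is the usual first-boot pattern on a node that has not yet been joined to a cluster: on kubeadm-provisioned nodes, /var/lib/kubelet/config.yaml is written during kubeadm init/join, and systemd keeps restarting kubelet until it exists (see the "Scheduled restart job" entry later in this log). A small sketch that checks for that precondition; the KubeletConfiguration header shown is the generic upstream form, assumed here rather than taken from this node:

    #!/usr/bin/env python3
    """Sketch: check for the kubelet config file the failing unit above is looking for."""
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    # Generic header of a KubeletConfiguration file (assumption, not this node's file).
    EXPECTED_HEADER = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
    ]

    if not CONFIG.exists():
        # Matches the error in the log: kubelet exits until a provisioner
        # (e.g. kubeadm) writes this file, and systemd schedules a restart.
        print(f"{CONFIG} missing; node not yet initialized or joined")
    else:
        print("first lines:", CONFIG.read_text().splitlines()[:2])
        print("expected header:", EXPECTED_HEADER)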
Jan 19 12:07:12.331712 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 19 12:07:12.356852 systemd-logind[1598]: New session 1 of user core. Jan 19 12:07:12.431979 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 19 12:07:12.443677 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 19 12:07:12.512981 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:12.531384 systemd-logind[1598]: New session 2 of user core. Jan 19 12:07:12.918085 systemd[1726]: Queued start job for default target default.target. Jan 19 12:07:12.936759 systemd[1726]: Created slice app.slice - User Application Slice. Jan 19 12:07:12.936963 systemd[1726]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 19 12:07:12.936987 systemd[1726]: Reached target paths.target - Paths. Jan 19 12:07:12.937822 systemd[1726]: Reached target timers.target - Timers. Jan 19 12:07:12.943987 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 19 12:07:12.948751 systemd[1726]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 19 12:07:13.028748 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 19 12:07:13.029958 systemd[1726]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 19 12:07:13.037772 systemd[1726]: Reached target sockets.target - Sockets. Jan 19 12:07:13.038505 systemd[1726]: Reached target basic.target - Basic System. Jan 19 12:07:13.038713 systemd[1726]: Reached target default.target - Main User Target. Jan 19 12:07:13.038764 systemd[1726]: Startup finished in 482ms. Jan 19 12:07:13.039054 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 19 12:07:13.058897 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 19 12:07:13.111052 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:58722.service - OpenSSH per-connection server daemon (10.0.0.1:58722). Jan 19 12:07:13.367804 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:13.377489 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:13.407890 systemd-logind[1598]: New session 3 of user core. Jan 19 12:07:13.419033 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 19 12:07:13.510944 sshd[1744]: Connection closed by 10.0.0.1 port 58722 Jan 19 12:07:13.512052 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 19 12:07:13.534062 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:58722.service: Deactivated successfully. Jan 19 12:07:13.542010 systemd[1]: session-3.scope: Deactivated successfully. Jan 19 12:07:13.551911 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Jan 19 12:07:13.557084 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:58730.service - OpenSSH per-connection server daemon (10.0.0.1:58730). Jan 19 12:07:13.562946 systemd-logind[1598]: Removed session 3. Jan 19 12:07:13.777028 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 58730 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:13.782801 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:13.809917 systemd-logind[1598]: New session 4 of user core. 
Jan 19 12:07:13.828988 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 19 12:07:13.897738 sshd[1754]: Connection closed by 10.0.0.1 port 58730 Jan 19 12:07:13.897682 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 19 12:07:13.913628 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:58730.service: Deactivated successfully. Jan 19 12:07:13.918778 systemd[1]: session-4.scope: Deactivated successfully. Jan 19 12:07:13.928552 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. Jan 19 12:07:13.932768 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:58736.service - OpenSSH per-connection server daemon (10.0.0.1:58736). Jan 19 12:07:13.939088 systemd-logind[1598]: Removed session 4. Jan 19 12:07:14.110075 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 58736 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:14.116731 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:14.145038 systemd-logind[1598]: New session 5 of user core. Jan 19 12:07:14.165664 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 19 12:07:14.238922 sshd[1764]: Connection closed by 10.0.0.1 port 58736 Jan 19 12:07:14.239847 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 19 12:07:14.256690 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:58744.service - OpenSSH per-connection server daemon (10.0.0.1:58744). Jan 19 12:07:14.257966 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:58736.service: Deactivated successfully. Jan 19 12:07:14.264855 systemd[1]: session-5.scope: Deactivated successfully. Jan 19 12:07:14.272687 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Jan 19 12:07:14.280809 systemd-logind[1598]: Removed session 5. Jan 19 12:07:14.481028 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 58744 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:14.486883 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:14.513840 systemd-logind[1598]: New session 6 of user core. Jan 19 12:07:14.535999 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 19 12:07:14.663740 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 19 12:07:14.665018 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:07:14.702771 sudo[1775]: pam_unix(sudo:session): session closed for user root Jan 19 12:07:14.709016 sshd[1774]: Connection closed by 10.0.0.1 port 58744 Jan 19 12:07:14.710809 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 19 12:07:14.726906 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:58758.service - OpenSSH per-connection server daemon (10.0.0.1:58758). Jan 19 12:07:14.728759 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:58744.service: Deactivated successfully. Jan 19 12:07:14.734906 systemd[1]: session-6.scope: Deactivated successfully. Jan 19 12:07:14.738425 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Jan 19 12:07:14.745902 systemd-logind[1598]: Removed session 6. 
Jan 19 12:07:14.939966 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 58758 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:14.943899 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:14.975677 systemd-logind[1598]: New session 7 of user core. Jan 19 12:07:15.004816 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 19 12:07:15.089947 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 19 12:07:15.091744 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:07:15.112539 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 19 12:07:15.160012 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 19 12:07:15.161926 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:07:15.210055 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 19 12:07:15.515000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 12:07:15.517023 augenrules[1812]: No rules Jan 19 12:07:15.521437 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 12:07:15.521898 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 12:07:15.525744 sudo[1787]: pam_unix(sudo:session): session closed for user root Jan 19 12:07:15.531875 sshd[1786]: Connection closed by 10.0.0.1 port 58758 Jan 19 12:07:15.534510 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 19 12:07:15.549801 kernel: audit: type=1305 audit(1768824435.515:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 12:07:15.549869 kernel: audit: type=1300 audit(1768824435.515:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1492ca90 a2=420 a3=0 items=0 ppid=1793 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:15.515000 audit[1812]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1492ca90 a2=420 a3=0 items=0 ppid=1793 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:15.613934 kernel: audit: type=1327 audit(1768824435.515:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:07:15.515000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:07:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.689851 kernel: audit: type=1130 audit(1768824435.521:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:07:15.692060 kernel: audit: type=1131 audit(1768824435.521:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.699938 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:58758.service: Deactivated successfully. Jan 19 12:07:15.707991 systemd[1]: session-7.scope: Deactivated successfully. Jan 19 12:07:15.714080 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Jan 19 12:07:15.719852 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:58770.service - OpenSSH per-connection server daemon (10.0.0.1:58770). Jan 19 12:07:15.725049 systemd-logind[1598]: Removed session 7. Jan 19 12:07:15.731838 kernel: audit: type=1106 audit(1768824435.524:226): pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.524000 audit[1787]: USER_END pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.524000 audit[1787]: CRED_DISP pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.783841 kernel: audit: type=1104 audit(1768824435.524:227): pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 19 12:07:15.540000 audit[1779]: USER_END pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.917848 kernel: audit: type=1106 audit(1768824435.540:228): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.917945 kernel: audit: type=1104 audit(1768824435.540:229): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.540000 audit[1779]: CRED_DISP pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.928983 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:07:15.933713 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:07:15.957608 systemd-logind[1598]: New session 8 of user core. Jan 19 12:07:15.960804 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 19 12:07:15.975747 kernel: audit: type=1131 audit(1768824435.701:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.55:22-10.0.0.1:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.55:22-10.0.0.1:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:15.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.55:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:07:15.926000 audit[1821]: USER_ACCT pid=1821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.930000 audit[1821]: CRED_ACQ pid=1821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.930000 audit[1821]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdca1191c0 a2=3 a3=0 items=0 ppid=1 pid=1821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:15.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:07:15.975000 audit[1821]: USER_START pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:15.981000 audit[1825]: CRED_ACQ pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:07:16.042000 audit[1826]: USER_ACCT pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:16.044787 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 19 12:07:16.043000 audit[1826]: CRED_REFR pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:16.046037 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:07:16.046000 audit[1826]: USER_START pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:07:17.313700 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 19 12:07:17.350804 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 19 12:07:18.475974 dockerd[1847]: time="2026-01-19T12:07:18.474854062Z" level=info msg="Starting up" Jan 19 12:07:18.486884 dockerd[1847]: time="2026-01-19T12:07:18.485811339Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 19 12:07:18.567914 dockerd[1847]: time="2026-01-19T12:07:18.567870087Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 19 12:07:18.787741 dockerd[1847]: time="2026-01-19T12:07:18.786669673Z" level=info msg="Loading containers: start." 
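The block of audit entries that follows records dockerd creating its iptables and ip6tables chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, the ISOLATION stages, and later DOCKER-USER). Each PROCTITLE field is the executed command line, hex-encoded with NUL separators, so it can be decoded directly; a short sketch using the first nat-table entry below as input:

    #!/usr/bin/env python3
    """Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv)."""

    # Copied verbatim from the first NETFILTER_CFG/PROCTITLE pair that follows.
    proctitle = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
                 "002D74006E6174002D4E00444F434B4552")

    argv = bytes.fromhex(proctitle).split(b"\x00")
    print([a.decode() for a in argv])
    # -> ['/usr/bin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']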
Jan 19 12:07:18.846806 kernel: Initializing XFRM netlink socket Jan 19 12:07:19.395000 audit[1900]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.395000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffccebc6bc0 a2=0 a3=0 items=0 ppid=1847 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 12:07:19.426000 audit[1902]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.426000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffecdc92910 a2=0 a3=0 items=0 ppid=1847 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 12:07:19.460000 audit[1904]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.460000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe86834d10 a2=0 a3=0 items=0 ppid=1847 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 12:07:19.485000 audit[1906]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.485000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff601a6a50 a2=0 a3=0 items=0 ppid=1847 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 12:07:19.515000 audit[1908]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.515000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe1421a40 a2=0 a3=0 items=0 ppid=1847 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 12:07:19.543000 audit[1910]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.543000 audit[1910]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffce254dc70 a2=0 a3=0 items=0 ppid=1847 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:07:19.579000 audit[1912]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.579000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4963efe0 a2=0 a3=0 items=0 ppid=1847 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.579000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:07:19.611000 audit[1914]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.611000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd3d308130 a2=0 a3=0 items=0 ppid=1847 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 12:07:19.790000 audit[1917]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.790000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffdfe727a0 a2=0 a3=0 items=0 ppid=1847 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 19 12:07:19.826000 audit[1919]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.826000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed1eb86d0 a2=0 a3=0 items=0 ppid=1847 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 12:07:19.855000 audit[1921]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.855000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc867edb00 a2=0 
a3=0 items=0 ppid=1847 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 12:07:19.879000 audit[1923]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.879000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe4b169090 a2=0 a3=0 items=0 ppid=1847 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:07:19.917000 audit[1925]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:19.917000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeb929bc60 a2=0 a3=0 items=0 ppid=1847 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:19.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 12:07:20.461000 audit[1955]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.461000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffc9f18d40 a2=0 a3=0 items=0 ppid=1847 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 12:07:20.504000 audit[1957]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.504000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff030ffd40 a2=0 a3=0 items=0 ppid=1847 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 12:07:20.535000 audit[1959]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.549809 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 19 12:07:20.549871 kernel: audit: type=1325 audit(1768824440.535:255): table=filter:17 family=10 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.535000 audit[1959]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffc3cbd3aa0 a2=0 a3=0 items=0 ppid=1847 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.653766 kernel: audit: type=1300 audit(1768824440.535:255): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3cbd3aa0 a2=0 a3=0 items=0 ppid=1847 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.653867 kernel: audit: type=1327 audit(1768824440.535:255): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 12:07:20.535000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 12:07:20.686847 kernel: audit: type=1325 audit(1768824440.563:256): table=filter:18 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.563000 audit[1961]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.563000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8313b8e0 a2=0 a3=0 items=0 ppid=1847 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.789841 kernel: audit: type=1300 audit(1768824440.563:256): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8313b8e0 a2=0 a3=0 items=0 ppid=1847 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 12:07:20.588000 audit[1963]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.859835 kernel: audit: type=1327 audit(1768824440.563:256): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 12:07:20.859932 kernel: audit: type=1325 audit(1768824440.588:257): table=filter:19 family=10 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.859963 kernel: audit: type=1300 audit(1768824440.588:257): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd95b06fd0 a2=0 a3=0 items=0 ppid=1847 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.588000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd95b06fd0 a2=0 a3=0 items=0 ppid=1847 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.924842 kernel: audit: type=1327 
audit(1768824440.588:257): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 12:07:20.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 12:07:20.956754 kernel: audit: type=1325 audit(1768824440.623:258): table=filter:20 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.623000 audit[1965]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.623000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe9159b20 a2=0 a3=0 items=0 ppid=1847 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:07:20.651000 audit[1967]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.651000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcaa6153c0 a2=0 a3=0 items=0 ppid=1847 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:07:20.690000 audit[1969]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.690000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe2142dbb0 a2=0 a3=0 items=0 ppid=1847 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.690000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 12:07:20.817000 audit[1971]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.817000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffa5b11870 a2=0 a3=0 items=0 ppid=1847 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 19 12:07:20.840000 audit[1973]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.840000 audit[1973]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffddbd7a670 a2=0 a3=0 items=0 ppid=1847 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 12:07:20.863000 audit[1975]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.863000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff0250bea0 a2=0 a3=0 items=0 ppid=1847 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.863000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 12:07:20.898000 audit[1977]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.898000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe5ab1b7d0 a2=0 a3=0 items=0 ppid=1847 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:07:20.933000 audit[1979]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:20.933000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff1e0baaa0 a2=0 a3=0 items=0 ppid=1847 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:20.933000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 12:07:21.016000 audit[1984]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.016000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd70a3dd0 a2=0 a3=0 items=0 ppid=1847 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.016000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 12:07:21.049000 audit[1986]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.049000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc2aca5f30 a2=0 a3=0 items=0 ppid=1847 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 12:07:21.078000 audit[1988]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.078000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe91430090 a2=0 a3=0 items=0 ppid=1847 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 12:07:21.109000 audit[1990]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:21.109000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb5662130 a2=0 a3=0 items=0 ppid=1847 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 12:07:21.141000 audit[1992]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:21.141000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdabb54ff0 a2=0 a3=0 items=0 ppid=1847 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 12:07:21.180000 audit[1994]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:07:21.180000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc593b3770 a2=0 a3=0 items=0 ppid=1847 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 12:07:21.295000 audit[1999]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.295000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffab800f60 a2=0 a3=0 items=0 ppid=1847 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.295000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 19 12:07:21.333000 audit[2001]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.333000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe20a9dce0 a2=0 a3=0 items=0 ppid=1847 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 19 12:07:21.485000 audit[2009]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.485000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff5b317b20 a2=0 a3=0 items=0 ppid=1847 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 19 12:07:21.647000 audit[2015]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.647000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd44ed93a0 a2=0 a3=0 items=0 ppid=1847 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 19 12:07:21.691000 audit[2017]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.691000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe9a233cf0 a2=0 a3=0 items=0 ppid=1847 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 19 12:07:21.729000 audit[2019]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.729000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2856b1c0 a2=0 a3=0 items=0 ppid=1847 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 19 12:07:21.758000 audit[2021]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.758000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffef1d75430 a2=0 a3=0 items=0 ppid=1847 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.758000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:07:21.793771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 19 12:07:21.799000 audit[2023]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:07:21.799000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc28bd26b0 a2=0 a3=0 items=0 ppid=1847 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:07:21.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 19 12:07:21.805777 systemd-networkd[1519]: docker0: Link UP Jan 19 12:07:21.805822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:07:21.845906 dockerd[1847]: time="2026-01-19T12:07:21.844917579Z" level=info msg="Loading containers: done." Jan 19 12:07:21.968937 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck825626095-merged.mount: Deactivated successfully. Jan 19 12:07:21.996733 dockerd[1847]: time="2026-01-19T12:07:21.996048288Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 19 12:07:21.996733 dockerd[1847]: time="2026-01-19T12:07:21.996692130Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 19 12:07:21.996938 dockerd[1847]: time="2026-01-19T12:07:21.996815270Z" level=info msg="Initializing buildkit" Jan 19 12:07:22.495051 dockerd[1847]: time="2026-01-19T12:07:22.494940679Z" level=info msg="Completed buildkit initialization" Jan 19 12:07:22.514030 dockerd[1847]: time="2026-01-19T12:07:22.513733298Z" level=info msg="Daemon has completed initialization" Jan 19 12:07:22.514030 dockerd[1847]: time="2026-01-19T12:07:22.513942358Z" level=info msg="API listen on /run/docker.sock" Jan 19 12:07:22.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:07:22.516835 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 19 12:07:22.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:22.531883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:07:22.586003 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:07:22.944086 kubelet[2057]: E0119 12:07:22.944030 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:07:22.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:22.955008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:07:22.955776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:07:22.956987 systemd[1]: kubelet.service: Consumed 859ms CPU time, 110.7M memory peak. Jan 19 12:07:24.917066 containerd[1618]: time="2026-01-19T12:07:24.916877430Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 19 12:07:26.552893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761048690.mount: Deactivated successfully. Jan 19 12:07:33.043675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 19 12:07:33.049755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 19 12:07:33.249962 containerd[1618]: time="2026-01-19T12:07:33.249919154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:33.270988 containerd[1618]: time="2026-01-19T12:07:33.267076690Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26924660" Jan 19 12:07:33.289470 containerd[1618]: time="2026-01-19T12:07:33.288965591Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:33.394739 containerd[1618]: time="2026-01-19T12:07:33.394059109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:33.398766 containerd[1618]: time="2026-01-19T12:07:33.398730871Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 8.478695883s" Jan 19 12:07:33.399645 containerd[1618]: time="2026-01-19T12:07:33.398857828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 19 12:07:33.401664 containerd[1618]: time="2026-01-19T12:07:33.400722324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 19 12:07:33.616721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:07:33.634868 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 19 12:07:33.635020 kernel: audit: type=1130 audit(1768824453.616:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:33.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:33.720087 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:07:34.208821 kubelet[2149]: E0119 12:07:34.207885 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:07:34.218916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:07:34.219867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:07:34.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:34.222965 systemd[1]: kubelet.service: Consumed 919ms CPU time, 110.6M memory peak. 
Jan 19 12:07:34.270032 kernel: audit: type=1131 audit(1768824454.222:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:38.855831 containerd[1618]: time="2026-01-19T12:07:38.855774218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:38.859898 containerd[1618]: time="2026-01-19T12:07:38.857748473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 19 12:07:38.863825 containerd[1618]: time="2026-01-19T12:07:38.863766992Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:38.880651 containerd[1618]: time="2026-01-19T12:07:38.880615027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:38.883809 containerd[1618]: time="2026-01-19T12:07:38.882993095Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 5.482241356s" Jan 19 12:07:38.883809 containerd[1618]: time="2026-01-19T12:07:38.883706472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 19 12:07:38.888738 containerd[1618]: time="2026-01-19T12:07:38.887880017Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 19 12:07:42.688851 containerd[1618]: time="2026-01-19T12:07:42.688806620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:42.697811 containerd[1618]: time="2026-01-19T12:07:42.696596768Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 19 12:07:42.701838 containerd[1618]: time="2026-01-19T12:07:42.701793632Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:42.710583 containerd[1618]: time="2026-01-19T12:07:42.710552039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:42.711754 containerd[1618]: time="2026-01-19T12:07:42.711688075Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 3.821949678s" Jan 19 12:07:42.712645 
containerd[1618]: time="2026-01-19T12:07:42.711899986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 19 12:07:42.714987 containerd[1618]: time="2026-01-19T12:07:42.714960060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 19 12:07:44.293026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 19 12:07:44.299906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:07:44.794624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:07:44.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:44.850589 kernel: audit: type=1130 audit(1768824464.793:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:44.879630 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:07:45.129847 kubelet[2178]: E0119 12:07:45.127618 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:07:45.136843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:07:45.140857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:07:45.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:45.142755 systemd[1]: kubelet.service: Consumed 650ms CPU time, 110.5M memory peak. Jan 19 12:07:45.199600 kernel: audit: type=1131 audit(1768824465.140:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:45.484696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192100745.mount: Deactivated successfully. 
Jan 19 12:07:48.066754 containerd[1618]: time="2026-01-19T12:07:48.066714477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:48.071802 containerd[1618]: time="2026-01-19T12:07:48.071607516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 19 12:07:48.077532 containerd[1618]: time="2026-01-19T12:07:48.077482019Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:48.087615 containerd[1618]: time="2026-01-19T12:07:48.087560243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:48.088732 containerd[1618]: time="2026-01-19T12:07:48.088610574Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 5.373183849s" Jan 19 12:07:48.088732 containerd[1618]: time="2026-01-19T12:07:48.088646530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 19 12:07:48.091696 containerd[1618]: time="2026-01-19T12:07:48.091661877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 19 12:07:49.417458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988731546.mount: Deactivated successfully. Jan 19 12:07:50.542591 update_engine[1599]: I20260119 12:07:50.541698 1599 update_attempter.cc:509] Updating boot flags... 
Jan 19 12:07:55.244635 containerd[1618]: time="2026-01-19T12:07:55.243797269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:55.249779 containerd[1618]: time="2026-01-19T12:07:55.249729856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22376580" Jan 19 12:07:55.257839 containerd[1618]: time="2026-01-19T12:07:55.254476121Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:55.266539 containerd[1618]: time="2026-01-19T12:07:55.266491783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:55.268489 containerd[1618]: time="2026-01-19T12:07:55.267672048Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.175883295s" Jan 19 12:07:55.268489 containerd[1618]: time="2026-01-19T12:07:55.267867491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 19 12:07:55.269866 containerd[1618]: time="2026-01-19T12:07:55.269532178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 19 12:07:55.293671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 19 12:07:55.300482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:07:55.949637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:07:55.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:56.005713 kernel: audit: type=1130 audit(1768824475.952:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:07:56.015936 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:07:56.131652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600697746.mount: Deactivated successfully. 
Jan 19 12:07:56.180754 containerd[1618]: time="2026-01-19T12:07:56.180715468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:56.195687 containerd[1618]: time="2026-01-19T12:07:56.189812659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 19 12:07:56.197451 containerd[1618]: time="2026-01-19T12:07:56.196706895Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:56.204923 containerd[1618]: time="2026-01-19T12:07:56.204706817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:07:56.207500 containerd[1618]: time="2026-01-19T12:07:56.205941800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 936.384765ms" Jan 19 12:07:56.209557 containerd[1618]: time="2026-01-19T12:07:56.208932834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 19 12:07:56.212679 containerd[1618]: time="2026-01-19T12:07:56.211895796Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 19 12:07:56.460649 kubelet[2267]: E0119 12:07:56.458691 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:07:56.476841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:07:56.477915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:07:56.480716 systemd[1]: kubelet.service: Consumed 950ms CPU time, 110.4M memory peak. Jan 19 12:07:56.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:56.540399 kernel: audit: type=1131 audit(1768824476.480:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:07:57.395602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102829095.mount: Deactivated successfully. Jan 19 12:08:06.550674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 19 12:08:06.576716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:08:07.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:07.667444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 19 12:08:07.730576 kernel: audit: type=1130 audit(1768824487.667:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:07.745862 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:08:08.156525 kubelet[2341]: E0119 12:08:08.156445 2341 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:08:08.164465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:08:08.164806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:08:08.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:08:08.166677 systemd[1]: kubelet.service: Consumed 927ms CPU time, 110.7M memory peak. Jan 19 12:08:08.229666 kernel: audit: type=1131 audit(1768824488.166:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:08:11.660834 containerd[1618]: time="2026-01-19T12:08:11.659876406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:11.663723 containerd[1618]: time="2026-01-19T12:08:11.663582556Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73609123" Jan 19 12:08:11.667855 containerd[1618]: time="2026-01-19T12:08:11.667790221Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:11.675776 containerd[1618]: time="2026-01-19T12:08:11.675721853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:11.679712 containerd[1618]: time="2026-01-19T12:08:11.677714594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 15.465791116s" Jan 19 12:08:11.679712 containerd[1618]: time="2026-01-19T12:08:11.677750050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 19 12:08:15.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:15.398484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 19 12:08:15.399283 systemd[1]: kubelet.service: Consumed 927ms CPU time, 110.7M memory peak. Jan 19 12:08:15.404337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:08:15.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:15.428576 kernel: audit: type=1130 audit(1768824495.398:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:15.428645 kernel: audit: type=1131 audit(1768824495.398:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:15.478711 systemd[1]: Reload requested from client PID 2381 ('systemctl') (unit session-8.scope)... Jan 19 12:08:15.478823 systemd[1]: Reloading... Jan 19 12:08:15.653631 zram_generator::config[2428]: No configuration found. Jan 19 12:08:16.000510 systemd[1]: Reloading finished in 520 ms. Jan 19 12:08:16.063000 audit: BPF prog-id=63 op=LOAD Jan 19 12:08:16.063000 audit: BPF prog-id=52 op=UNLOAD Jan 19 12:08:16.063000 audit: BPF prog-id=64 op=LOAD Jan 19 12:08:16.064000 audit: BPF prog-id=65 op=LOAD Jan 19 12:08:16.064000 audit: BPF prog-id=53 op=UNLOAD Jan 19 12:08:16.064000 audit: BPF prog-id=54 op=UNLOAD Jan 19 12:08:16.064000 audit: BPF prog-id=66 op=LOAD Jan 19 12:08:16.064000 audit: BPF prog-id=67 op=LOAD Jan 19 12:08:16.064000 audit: BPF prog-id=46 op=UNLOAD Jan 19 12:08:16.064000 audit: BPF prog-id=47 op=UNLOAD Jan 19 12:08:16.066000 audit: BPF prog-id=68 op=LOAD Jan 19 12:08:16.066000 audit: BPF prog-id=55 op=UNLOAD Jan 19 12:08:16.066000 audit: BPF prog-id=69 op=LOAD Jan 19 12:08:16.066000 audit: BPF prog-id=70 op=LOAD Jan 19 12:08:16.066000 audit: BPF prog-id=56 op=UNLOAD Jan 19 12:08:16.075332 kernel: audit: type=1334 audit(1768824496.063:293): prog-id=63 op=LOAD Jan 19 12:08:16.075377 kernel: audit: type=1334 audit(1768824496.063:294): prog-id=52 op=UNLOAD Jan 19 12:08:16.075399 kernel: audit: type=1334 audit(1768824496.063:295): prog-id=64 op=LOAD Jan 19 12:08:16.075419 kernel: audit: type=1334 audit(1768824496.064:296): prog-id=65 op=LOAD Jan 19 12:08:16.075434 kernel: audit: type=1334 audit(1768824496.064:297): prog-id=53 op=UNLOAD Jan 19 12:08:16.075449 kernel: audit: type=1334 audit(1768824496.064:298): prog-id=54 op=UNLOAD Jan 19 12:08:16.075468 kernel: audit: type=1334 audit(1768824496.064:299): prog-id=66 op=LOAD Jan 19 12:08:16.075486 kernel: audit: type=1334 audit(1768824496.064:300): prog-id=67 op=LOAD Jan 19 12:08:16.066000 audit: BPF prog-id=57 op=UNLOAD Jan 19 12:08:16.074000 audit: BPF prog-id=71 op=LOAD Jan 19 12:08:16.074000 audit: BPF prog-id=60 op=UNLOAD Jan 19 12:08:16.074000 audit: BPF prog-id=72 op=LOAD Jan 19 12:08:16.076000 audit: BPF prog-id=73 op=LOAD Jan 19 12:08:16.076000 audit: BPF prog-id=61 op=UNLOAD Jan 19 12:08:16.076000 audit: BPF prog-id=62 op=UNLOAD Jan 19 12:08:16.077000 audit: BPF prog-id=74 op=LOAD Jan 19 12:08:16.077000 audit: BPF prog-id=58 op=UNLOAD Jan 19 12:08:16.079000 audit: BPF prog-id=75 op=LOAD Jan 19 12:08:16.079000 audit: BPF prog-id=59 op=UNLOAD Jan 19 12:08:16.080000 audit: BPF prog-id=76 op=LOAD Jan 19 12:08:16.080000 audit: BPF prog-id=43 op=UNLOAD Jan 19 12:08:16.080000 audit: BPF 
prog-id=77 op=LOAD Jan 19 12:08:16.080000 audit: BPF prog-id=78 op=LOAD Jan 19 12:08:16.080000 audit: BPF prog-id=44 op=UNLOAD Jan 19 12:08:16.080000 audit: BPF prog-id=45 op=UNLOAD Jan 19 12:08:16.082000 audit: BPF prog-id=79 op=LOAD Jan 19 12:08:16.082000 audit: BPF prog-id=48 op=UNLOAD Jan 19 12:08:16.084000 audit: BPF prog-id=80 op=LOAD Jan 19 12:08:16.085000 audit: BPF prog-id=49 op=UNLOAD Jan 19 12:08:16.085000 audit: BPF prog-id=81 op=LOAD Jan 19 12:08:16.085000 audit: BPF prog-id=82 op=LOAD Jan 19 12:08:16.085000 audit: BPF prog-id=50 op=UNLOAD Jan 19 12:08:16.085000 audit: BPF prog-id=51 op=UNLOAD Jan 19 12:08:16.149861 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 19 12:08:16.150348 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 19 12:08:16.151021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:08:16.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:08:16.151404 systemd[1]: kubelet.service: Consumed 301ms CPU time, 98.3M memory peak. Jan 19 12:08:16.155043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:08:16.512747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:08:16.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:16.533642 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 12:08:16.732291 kubelet[2474]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 12:08:16.732699 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:08:16.732699 kubelet[2474]: I0119 12:08:16.732483 2474 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 12:08:17.300836 kubelet[2474]: I0119 12:08:17.300682 2474 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 19 12:08:17.300836 kubelet[2474]: I0119 12:08:17.300805 2474 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 12:08:17.304443 kubelet[2474]: I0119 12:08:17.303875 2474 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 19 12:08:17.304443 kubelet[2474]: I0119 12:08:17.304077 2474 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 19 12:08:17.308733 kubelet[2474]: I0119 12:08:17.308374 2474 server.go:956] "Client rotation is on, will bootstrap in background" Jan 19 12:08:17.347029 kubelet[2474]: E0119 12:08:17.346484 2474 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 19 12:08:17.350800 kubelet[2474]: I0119 12:08:17.350662 2474 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 12:08:17.360649 kubelet[2474]: I0119 12:08:17.360629 2474 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 12:08:17.376657 kubelet[2474]: I0119 12:08:17.375405 2474 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 19 12:08:17.378189 kubelet[2474]: I0119 12:08:17.377486 2474 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 12:08:17.378664 kubelet[2474]: I0119 12:08:17.378023 2474 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 12:08:17.378664 kubelet[2474]: I0119 12:08:17.378577 2474 topology_manager.go:138] "Creating topology manager with none policy" Jan 19 12:08:17.378664 kubelet[2474]: I0119 12:08:17.378587 2474 container_manager_linux.go:306] "Creating device plugin manager" Jan 19 12:08:17.379078 kubelet[2474]: I0119 12:08:17.378677 2474 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 19 12:08:17.386832 kubelet[2474]: I0119 12:08:17.386806 2474 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:08:17.387519 kubelet[2474]: I0119 12:08:17.387503 2474 kubelet.go:475] "Attempting to sync node with API server" Jan 19 
12:08:17.387570 kubelet[2474]: I0119 12:08:17.387524 2474 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 12:08:17.388109 kubelet[2474]: I0119 12:08:17.388057 2474 kubelet.go:387] "Adding apiserver pod source" Jan 19 12:08:17.391732 kubelet[2474]: I0119 12:08:17.391221 2474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 12:08:17.396260 kubelet[2474]: E0119 12:08:17.395710 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 19 12:08:17.402620 kubelet[2474]: I0119 12:08:17.402598 2474 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 12:08:17.410059 kubelet[2474]: E0119 12:08:17.409804 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 19 12:08:17.410835 kubelet[2474]: I0119 12:08:17.410618 2474 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 19 12:08:17.414269 kubelet[2474]: I0119 12:08:17.411794 2474 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 19 12:08:17.414269 kubelet[2474]: W0119 12:08:17.411859 2474 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 19 12:08:17.428359 kubelet[2474]: I0119 12:08:17.427816 2474 server.go:1262] "Started kubelet" Jan 19 12:08:17.428684 kubelet[2474]: I0119 12:08:17.428459 2474 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 12:08:17.428910 kubelet[2474]: I0119 12:08:17.428792 2474 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 19 12:08:17.429715 kubelet[2474]: I0119 12:08:17.429595 2474 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 12:08:17.430878 kubelet[2474]: I0119 12:08:17.430711 2474 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 12:08:17.430878 kubelet[2474]: I0119 12:08:17.430836 2474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 12:08:17.433276 kubelet[2474]: I0119 12:08:17.432882 2474 server.go:310] "Adding debug handlers to kubelet server" Jan 19 12:08:17.439339 kubelet[2474]: E0119 12:08:17.434903 2474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c20823f2f7fb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-19 12:08:17.427677112 +0000 UTC m=+0.876370596,LastTimestamp:2026-01-19 12:08:17.427677112 +0000 UTC m=+0.876370596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 19 12:08:17.439339 kubelet[2474]: I0119 12:08:17.436904 2474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 12:08:17.439339 kubelet[2474]: E0119 12:08:17.438527 2474 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 19 12:08:17.439339 kubelet[2474]: I0119 12:08:17.438554 2474 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 19 12:08:17.439339 kubelet[2474]: I0119 12:08:17.438797 2474 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 19 12:08:17.439339 kubelet[2474]: I0119 12:08:17.438835 2474 reconciler.go:29] "Reconciler: start to sync state" Jan 19 12:08:17.439698 kubelet[2474]: E0119 12:08:17.439655 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 19 12:08:17.442032 kubelet[2474]: I0119 12:08:17.440752 2474 factory.go:223] Registration of the systemd container factory successfully Jan 19 12:08:17.442032 kubelet[2474]: I0119 12:08:17.441061 2474 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 12:08:17.442032 kubelet[2474]: E0119 12:08:17.441705 2474 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 19 12:08:17.442797 kubelet[2474]: E0119 12:08:17.442525 2474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Jan 19 12:08:17.445337 kubelet[2474]: I0119 12:08:17.444731 2474 factory.go:223] Registration of the containerd container factory successfully Jan 19 12:08:17.471000 audit[2495]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.471000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd66a79260 a2=0 a3=0 items=0 ppid=2474 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 12:08:17.478000 audit[2497]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.478000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba5546a0 a2=0 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 12:08:17.486750 kubelet[2474]: I0119 12:08:17.486490 2474 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 12:08:17.486750 kubelet[2474]: I0119 12:08:17.486635 2474 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 12:08:17.486750 kubelet[2474]: I0119 12:08:17.486663 2474 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:08:17.493510 kubelet[2474]: I0119 12:08:17.493046 2474 policy_none.go:49] "None policy: Start" Jan 19 12:08:17.493510 kubelet[2474]: I0119 12:08:17.493430 2474 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 19 12:08:17.493510 kubelet[2474]: I0119 12:08:17.493448 2474 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 19 12:08:17.494000 audit[2499]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.494000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffca4ebdc10 a2=0 a3=0 items=0 ppid=2474 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:08:17.496634 kubelet[2474]: I0119 12:08:17.496515 2474 policy_none.go:47] "Start" Jan 19 12:08:17.507030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 19 12:08:17.508000 audit[2501]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.508000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff7c0a88a0 a2=0 a3=0 items=0 ppid=2474 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:08:17.539530 kubelet[2474]: E0119 12:08:17.539370 2474 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 19 12:08:17.540000 audit[2504]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.540000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd1dd19af0 a2=0 a3=0 items=0 ppid=2474 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 19 12:08:17.541846 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 19 12:08:17.542736 kubelet[2474]: I0119 12:08:17.542455 2474 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 19 12:08:17.547000 audit[2506]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:17.547000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff6e64abd0 a2=0 a3=0 items=0 ppid=2474 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.547000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 12:08:17.549300 kubelet[2474]: I0119 12:08:17.549281 2474 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 19 12:08:17.549455 kubelet[2474]: I0119 12:08:17.549445 2474 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 19 12:08:17.549513 kubelet[2474]: I0119 12:08:17.549504 2474 kubelet.go:2427] "Starting kubelet main sync loop" Jan 19 12:08:17.549810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 19 12:08:17.550446 kubelet[2474]: E0119 12:08:17.549586 2474 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 12:08:17.551798 kubelet[2474]: E0119 12:08:17.551488 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 19 12:08:17.554000 audit[2508]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:17.554000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe67adf5c0 a2=0 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 12:08:17.560861 kubelet[2474]: E0119 12:08:17.560843 2474 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 19 12:08:17.561000 audit[2507]: NETFILTER_CFG table=mangle:49 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.561000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffa1baa60 a2=0 a3=0 items=0 ppid=2474 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 12:08:17.562352 kubelet[2474]: I0119 12:08:17.561741 2474 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 12:08:17.562352 kubelet[2474]: I0119 12:08:17.561844 2474 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 12:08:17.563000 audit[2509]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:17.563000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2ff75c90 a2=0 a3=0 items=0 ppid=2474 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 12:08:17.565740 kubelet[2474]: I0119 12:08:17.563679 2474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 12:08:17.574000 audit[2511]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:17.574000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7de8d3b0 a2=0 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.575762 kubelet[2474]: E0119 12:08:17.573776 2474 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 19 12:08:17.575762 kubelet[2474]: E0119 12:08:17.575446 2474 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 19 12:08:17.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 12:08:17.579000 audit[2510]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.579000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd408ed220 a2=0 a3=0 items=0 ppid=2474 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 12:08:17.587000 audit[2512]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:17.587000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec0e5c120 a2=0 a3=0 items=0 ppid=2474 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:17.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 12:08:17.645435 kubelet[2474]: E0119 12:08:17.644719 2474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Jan 19 12:08:17.665356 kubelet[2474]: I0119 12:08:17.664498 2474 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:17.665706 kubelet[2474]: E0119 12:08:17.665537 2474 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 19 12:08:17.680766 systemd[1]: Created slice kubepods-burstable-pod4c53d9ab9be86bad6a4b13fa1e77def8.slice - libcontainer container kubepods-burstable-pod4c53d9ab9be86bad6a4b13fa1e77def8.slice. Jan 19 12:08:17.695391 kubelet[2474]: E0119 12:08:17.694821 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:17.705320 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 19 12:08:17.729669 kubelet[2474]: E0119 12:08:17.729534 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:17.736434 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 19 12:08:17.741457 kubelet[2474]: I0119 12:08:17.740854 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:17.741457 kubelet[2474]: I0119 12:08:17.740882 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:17.741457 kubelet[2474]: I0119 12:08:17.740899 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:17.741457 kubelet[2474]: I0119 12:08:17.740912 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:17.741457 kubelet[2474]: I0119 12:08:17.741038 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:17.742075 kubelet[2474]: I0119 12:08:17.741053 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:17.742075 kubelet[2474]: I0119 12:08:17.741069 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:17.742075 kubelet[2474]: I0119 12:08:17.741086 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:17.742075 kubelet[2474]: I0119 12:08:17.741409 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 19 12:08:17.742542 kubelet[2474]: E0119 12:08:17.742512 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:17.871030 kubelet[2474]: I0119 12:08:17.870595 2474 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:17.872914 kubelet[2474]: E0119 12:08:17.871882 2474 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 19 12:08:18.003768 kubelet[2474]: E0119 12:08:18.003641 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:18.005595 containerd[1618]: time="2026-01-19T12:08:18.005347902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c53d9ab9be86bad6a4b13fa1e77def8,Namespace:kube-system,Attempt:0,}" Jan 19 12:08:18.036413 kubelet[2474]: E0119 12:08:18.035887 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:18.040016 containerd[1618]: time="2026-01-19T12:08:18.039773410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 19 12:08:18.046465 kubelet[2474]: E0119 12:08:18.045912 2474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Jan 19 12:08:18.050008 kubelet[2474]: E0119 12:08:18.049081 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:18.050716 containerd[1618]: time="2026-01-19T12:08:18.050563457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 19 12:08:18.277025 kubelet[2474]: I0119 12:08:18.276413 2474 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:18.277025 kubelet[2474]: E0119 12:08:18.276675 2474 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 19 12:08:18.334776 kubelet[2474]: E0119 12:08:18.334644 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 19 12:08:18.372511 kubelet[2474]: E0119 12:08:18.372469 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 19 12:08:18.466774 kubelet[2474]: E0119 12:08:18.466646 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 19 12:08:18.614347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045721.mount: Deactivated successfully. Jan 19 12:08:18.635495 containerd[1618]: time="2026-01-19T12:08:18.634827109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:08:18.643647 containerd[1618]: time="2026-01-19T12:08:18.643033346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 19 12:08:18.657887 containerd[1618]: time="2026-01-19T12:08:18.657367715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:08:18.665720 containerd[1618]: time="2026-01-19T12:08:18.665632238Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:08:18.667523 containerd[1618]: time="2026-01-19T12:08:18.667407925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 19 12:08:18.674817 containerd[1618]: time="2026-01-19T12:08:18.674452956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:08:18.677788 containerd[1618]: time="2026-01-19T12:08:18.677528132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 19 12:08:18.680676 containerd[1618]: time="2026-01-19T12:08:18.680477624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:08:18.682037 containerd[1618]: time="2026-01-19T12:08:18.681751251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.666441ms" Jan 19 12:08:18.685675 containerd[1618]: time="2026-01-19T12:08:18.685419882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.707612ms" Jan 19 12:08:18.688348 containerd[1618]: time="2026-01-19T12:08:18.687649041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 628.89268ms" Jan 19 12:08:18.774406 containerd[1618]: time="2026-01-19T12:08:18.773678185Z" level=info msg="connecting to shim 15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304" address="unix:///run/containerd/s/f8f56b7da2e30df393ccce0b97784070f46681b02112ea79ef3e1f632697ac69" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:18.783447 containerd[1618]: time="2026-01-19T12:08:18.783411376Z" level=info msg="connecting to shim b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646" address="unix:///run/containerd/s/32c65d72f6c286f4fbab4de5f8c1eecca3389c5f2a1fb571e95dcbb526e492a3" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:18.783907 containerd[1618]: time="2026-01-19T12:08:18.783758134Z" level=info msg="connecting to shim 0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a" address="unix:///run/containerd/s/1effe34d7968d046065c96b55ba374c599bbd43f0e75c72dc3c2465907c672d7" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:18.852571 kubelet[2474]: E0119 12:08:18.852414 2474 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Jan 19 12:08:18.884614 systemd[1]: Started cri-containerd-15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304.scope - libcontainer container 15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304. Jan 19 12:08:18.917811 systemd[1]: Started cri-containerd-b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646.scope - libcontainer container b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646. 
Jan 19 12:08:18.923000 audit: BPF prog-id=83 op=LOAD Jan 19 12:08:18.923000 audit: BPF prog-id=84 op=LOAD Jan 19 12:08:18.923000 audit[2569]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.924000 audit: BPF prog-id=84 op=UNLOAD Jan 19 12:08:18.924000 audit[2569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.924000 audit: BPF prog-id=85 op=LOAD Jan 19 12:08:18.924000 audit[2569]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.925000 audit: BPF prog-id=86 op=LOAD Jan 19 12:08:18.925000 audit[2569]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.925000 audit: BPF prog-id=86 op=UNLOAD Jan 19 12:08:18.925000 audit[2569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.925000 audit: BPF prog-id=85 op=UNLOAD Jan 19 12:08:18.925000 audit[2569]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.926000 audit: BPF prog-id=87 op=LOAD Jan 19 12:08:18.926000 audit[2569]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2531 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613136616164346131643039646237306439623965393234616438 Jan 19 12:08:18.951000 audit: BPF prog-id=88 op=LOAD Jan 19 12:08:18.956000 audit: BPF prog-id=89 op=LOAD Jan 19 12:08:18.956000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.956000 audit: BPF prog-id=89 op=UNLOAD Jan 19 12:08:18.956000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.958000 audit: BPF prog-id=90 op=LOAD Jan 19 12:08:18.958000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.959000 audit: BPF prog-id=91 op=LOAD Jan 19 12:08:18.959000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2548 pid=2585 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.959000 audit: BPF prog-id=91 op=UNLOAD Jan 19 12:08:18.959000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.959000 audit: BPF prog-id=90 op=UNLOAD Jan 19 12:08:18.959000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.959000 audit: BPF prog-id=92 op=LOAD Jan 19 12:08:18.959000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2548 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:18.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326432663965333761366530636436303230663563346164613339 Jan 19 12:08:18.977470 systemd[1]: Started cri-containerd-0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a.scope - libcontainer container 0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a. 
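All of the SYSCALL records above carry arch=c000003e, i.e. x86_64, so the syscall numbers map to the standard x86_64 table: 46 is sendmsg (the netlink traffic behind the iptables/ip6tables chain setup), 321 is bpf (the program loads runc performs while starting each container), and 3 is close. A tiny Go lookup covering just the values seen here:

package main

import "fmt"

// Syscall numbers appearing in the audit records above, resolved against the
// x86_64 syscall table (arch=c000003e).
var x8664Syscalls = map[int]string{
    3:   "close",
    46:  "sendmsg", // netlink messages sent by xtables-nft-multi
    321: "bpf",     // BPF program loads/unloads issued by runc
}

func main() {
    for _, nr := range []int{46, 321, 3} {
        fmt.Printf("syscall=%d -> %s\n", nr, x8664Syscalls[nr])
    }
}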
Jan 19 12:08:18.995447 kubelet[2474]: E0119 12:08:18.995421 2474 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 19 12:08:19.045000 audit: BPF prog-id=93 op=LOAD Jan 19 12:08:19.049811 containerd[1618]: time="2026-01-19T12:08:19.049783546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304\"" Jan 19 12:08:19.050000 audit: BPF prog-id=94 op=LOAD Jan 19 12:08:19.050000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.052000 audit: BPF prog-id=94 op=UNLOAD Jan 19 12:08:19.052000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.058385 kubelet[2474]: E0119 12:08:19.057567 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:19.052000 audit: BPF prog-id=95 op=LOAD Jan 19 12:08:19.052000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.057000 audit: BPF prog-id=96 op=LOAD Jan 19 12:08:19.057000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.057000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.057000 audit: BPF prog-id=96 op=UNLOAD Jan 19 12:08:19.057000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.057000 audit: BPF prog-id=95 op=UNLOAD Jan 19 12:08:19.057000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.059000 audit: BPF prog-id=97 op=LOAD Jan 19 12:08:19.059000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030373566633838643337343264323165343363663739386334653164 Jan 19 12:08:19.080517 containerd[1618]: time="2026-01-19T12:08:19.080468188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646\"" Jan 19 12:08:19.081226 kubelet[2474]: I0119 12:08:19.080804 2474 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:19.081456 containerd[1618]: time="2026-01-19T12:08:19.081435393Z" level=info msg="CreateContainer within sandbox \"15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 19 12:08:19.082300 kubelet[2474]: E0119 12:08:19.081721 2474 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 19 12:08:19.083309 kubelet[2474]: E0119 12:08:19.081732 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:19.100616 
containerd[1618]: time="2026-01-19T12:08:19.100595004Z" level=info msg="CreateContainer within sandbox \"b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 19 12:08:19.117877 containerd[1618]: time="2026-01-19T12:08:19.117715061Z" level=info msg="Container 40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:08:19.144504 containerd[1618]: time="2026-01-19T12:08:19.143389225Z" level=info msg="Container e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:08:19.153466 containerd[1618]: time="2026-01-19T12:08:19.153444913Z" level=info msg="CreateContainer within sandbox \"15a16aad4a1d09db70d9b9e924ad808ce212e847e18b2d412a0629e2358d1304\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c\"" Jan 19 12:08:19.158417 containerd[1618]: time="2026-01-19T12:08:19.157577943Z" level=info msg="StartContainer for \"40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c\"" Jan 19 12:08:19.164752 containerd[1618]: time="2026-01-19T12:08:19.164732640Z" level=info msg="connecting to shim 40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c" address="unix:///run/containerd/s/f8f56b7da2e30df393ccce0b97784070f46681b02112ea79ef3e1f632697ac69" protocol=ttrpc version=3 Jan 19 12:08:19.169894 containerd[1618]: time="2026-01-19T12:08:19.169765174Z" level=info msg="CreateContainer within sandbox \"b32d2f9e37a6e0cd6020f5c4ada392d03af1e4b8bcf5411e73f8bd31adba4646\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299\"" Jan 19 12:08:19.172818 containerd[1618]: time="2026-01-19T12:08:19.172613832Z" level=info msg="StartContainer for \"e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299\"" Jan 19 12:08:19.177042 containerd[1618]: time="2026-01-19T12:08:19.176758168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c53d9ab9be86bad6a4b13fa1e77def8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a\"" Jan 19 12:08:19.179771 kubelet[2474]: E0119 12:08:19.179292 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:19.182833 containerd[1618]: time="2026-01-19T12:08:19.182597437Z" level=info msg="connecting to shim e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299" address="unix:///run/containerd/s/32c65d72f6c286f4fbab4de5f8c1eecca3389c5f2a1fb571e95dcbb526e492a3" protocol=ttrpc version=3 Jan 19 12:08:19.195483 containerd[1618]: time="2026-01-19T12:08:19.195405880Z" level=info msg="CreateContainer within sandbox \"0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 19 12:08:19.216611 systemd[1]: Started cri-containerd-40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c.scope - libcontainer container 40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c. 
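The containerd messages above are logfmt-style key=value records; fields such as address=, protocol=ttrpc and version=3 identify the shim socket each sandbox task is served over. A rough Go extractor for those fields, a sketch only (it trims surrounding quotes and does not handle every escape the msg field can contain):

package main

import (
    "fmt"
    "regexp"
    "strings"
)

// fieldRe matches key="quoted value" or key=bareword pairs.
var fieldRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func fields(line string) map[string]string {
    out := map[string]string{}
    for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
        out[m[1]] = strings.Trim(m[2], `"`)
    }
    return out
}

func main() {
    // Record content taken from the "connecting to shim" message above.
    line := `time="2026-01-19T12:08:19.164732640Z" level=info msg="connecting to shim 40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c" address="unix:///run/containerd/s/f8f56b7da2e30df393ccce0b97784070f46681b02112ea79ef3e1f632697ac69" protocol=ttrpc version=3`
    f := fields(line)
    fmt.Println(f["protocol"], f["version"])
    fmt.Println(f["address"])
}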
Jan 19 12:08:19.226298 containerd[1618]: time="2026-01-19T12:08:19.223775512Z" level=info msg="Container 497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:08:19.257320 containerd[1618]: time="2026-01-19T12:08:19.256848599Z" level=info msg="CreateContainer within sandbox \"0075fc88d3742d21e43cf798c4e1d1e439cc9ca1f50f352377e7842649e3cd5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00\"" Jan 19 12:08:19.260447 containerd[1618]: time="2026-01-19T12:08:19.259797730Z" level=info msg="StartContainer for \"497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00\"" Jan 19 12:08:19.268592 containerd[1618]: time="2026-01-19T12:08:19.266580232Z" level=info msg="connecting to shim 497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00" address="unix:///run/containerd/s/1effe34d7968d046065c96b55ba374c599bbd43f0e75c72dc3c2465907c672d7" protocol=ttrpc version=3 Jan 19 12:08:19.279800 systemd[1]: Started cri-containerd-e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299.scope - libcontainer container e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299. Jan 19 12:08:19.282000 audit: BPF prog-id=98 op=LOAD Jan 19 12:08:19.284000 audit: BPF prog-id=99 op=LOAD Jan 19 12:08:19.284000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.284000 audit: BPF prog-id=99 op=UNLOAD Jan 19 12:08:19.284000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.286000 audit: BPF prog-id=100 op=LOAD Jan 19 12:08:19.286000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.286000 audit: BPF prog-id=101 op=LOAD Jan 19 12:08:19.286000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2531 
pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.286000 audit: BPF prog-id=101 op=UNLOAD Jan 19 12:08:19.286000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.286000 audit: BPF prog-id=100 op=UNLOAD Jan 19 12:08:19.286000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.286000 audit: BPF prog-id=102 op=LOAD Jan 19 12:08:19.286000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2531 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646365373838623463663465623761653962383836373163653434 Jan 19 12:08:19.326818 systemd[1]: Started cri-containerd-497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00.scope - libcontainer container 497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00. 
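runc loads and almost immediately unloads several BPF programs for each container it starts, which is why the prog-id counter climbs so quickly above. A small Go sketch that tallies LOAD versus UNLOAD operations from lines like these; a real pass would read the journal instead of a hard-coded sample:

package main

import (
    "fmt"
    "regexp"
)

var bpfRe = regexp.MustCompile(`BPF prog-id=(\d+) op=(LOAD|UNLOAD)`)

func main() {
    // Sample records copied from the audit output above.
    lines := []string{
        "Jan 19 12:08:19.282000 audit: BPF prog-id=98 op=LOAD",
        "Jan 19 12:08:19.284000 audit: BPF prog-id=99 op=LOAD",
        "Jan 19 12:08:19.284000 audit: BPF prog-id=99 op=UNLOAD",
    }
    counts := map[string]int{}
    for _, l := range lines {
        if m := bpfRe.FindStringSubmatch(l); m != nil {
            counts[m[2]]++
        }
    }
    fmt.Println("LOAD:", counts["LOAD"], "UNLOAD:", counts["UNLOAD"])
}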
Jan 19 12:08:19.341000 audit: BPF prog-id=103 op=LOAD Jan 19 12:08:19.344000 audit: BPF prog-id=104 op=LOAD Jan 19 12:08:19.344000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.344000 audit: BPF prog-id=104 op=UNLOAD Jan 19 12:08:19.344000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.355000 audit: BPF prog-id=105 op=LOAD Jan 19 12:08:19.355000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.357000 audit: BPF prog-id=106 op=LOAD Jan 19 12:08:19.357000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.357000 audit: BPF prog-id=106 op=UNLOAD Jan 19 12:08:19.357000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.357000 audit: BPF prog-id=105 op=UNLOAD Jan 19 12:08:19.357000 audit[2664]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.357000 audit: BPF prog-id=107 op=LOAD Jan 19 12:08:19.357000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2548 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376639386263303839663964376132316332623332663530346631 Jan 19 12:08:19.370000 audit: BPF prog-id=108 op=LOAD Jan 19 12:08:19.373000 audit: BPF prog-id=109 op=LOAD Jan 19 12:08:19.373000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.378000 audit: BPF prog-id=109 op=UNLOAD Jan 19 12:08:19.378000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.378000 audit: BPF prog-id=110 op=LOAD Jan 19 12:08:19.378000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.379000 audit: BPF prog-id=111 op=LOAD Jan 19 12:08:19.379000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2546 pid=2689 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.379000 audit: BPF prog-id=111 op=UNLOAD Jan 19 12:08:19.379000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.379000 audit: BPF prog-id=110 op=UNLOAD Jan 19 12:08:19.379000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.379000 audit: BPF prog-id=112 op=LOAD Jan 19 12:08:19.379000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2546 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373833323432383430396535336665646632653737383433636536 Jan 19 12:08:19.382510 kubelet[2474]: E0119 12:08:19.382363 2474 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 19 12:08:19.419350 containerd[1618]: time="2026-01-19T12:08:19.419074407Z" level=info msg="StartContainer for \"40dce788b4cf4eb7ae9b88671ce448cfe42ecf0d31bd9ea4544c3fdd8416610c\" returns successfully" Jan 19 12:08:19.507303 containerd[1618]: time="2026-01-19T12:08:19.506646946Z" level=info msg="StartContainer for \"e97f98bc089f9d7a21c2b32f504f1bd6a871878a6ea989d29fdbded909ba2299\" returns successfully" Jan 19 12:08:19.526271 containerd[1618]: time="2026-01-19T12:08:19.523822847Z" level=info msg="StartContainer for \"497832428409e53fedf2e77843ce60a440f37cd72b5a39bfd6a93414b92bea00\" returns successfully" Jan 
19 12:08:19.609873 kubelet[2474]: E0119 12:08:19.609527 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:19.612256 kubelet[2474]: E0119 12:08:19.611776 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:19.617366 kubelet[2474]: E0119 12:08:19.616882 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:19.617366 kubelet[2474]: E0119 12:08:19.617346 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:19.644857 kubelet[2474]: E0119 12:08:19.644364 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:19.644857 kubelet[2474]: E0119 12:08:19.644474 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:20.655877 kubelet[2474]: E0119 12:08:20.654850 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:20.655877 kubelet[2474]: E0119 12:08:20.655071 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:20.761489 kubelet[2474]: E0119 12:08:20.656442 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:20.761489 kubelet[2474]: E0119 12:08:20.656517 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:20.761489 kubelet[2474]: E0119 12:08:20.656715 2474 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:08:20.761489 kubelet[2474]: E0119 12:08:20.656811 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:20.761489 kubelet[2474]: I0119 12:08:20.700773 2474 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:22.921305 kubelet[2474]: E0119 12:08:22.919566 2474 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 19 12:08:22.984310 kubelet[2474]: I0119 12:08:22.983319 2474 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 19 12:08:22.984310 kubelet[2474]: E0119 12:08:22.983356 2474 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 19 12:08:23.028639 kubelet[2474]: E0119 12:08:23.028052 2474 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.188c20823f2f7fb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-19 12:08:17.427677112 +0000 UTC m=+0.876370596,LastTimestamp:2026-01-19 12:08:17.427677112 +0000 UTC m=+0.876370596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 19 12:08:23.043262 kubelet[2474]: I0119 12:08:23.042529 2474 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:23.067329 kubelet[2474]: E0119 12:08:23.066642 2474 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:23.069425 kubelet[2474]: I0119 12:08:23.068812 2474 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:23.079240 kubelet[2474]: E0119 12:08:23.078363 2474 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:23.079240 kubelet[2474]: I0119 12:08:23.078486 2474 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:08:23.084650 kubelet[2474]: E0119 12:08:23.083306 2474 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 19 12:08:23.398407 kubelet[2474]: I0119 12:08:23.398030 2474 apiserver.go:52] "Watching apiserver" Jan 19 12:08:23.442628 kubelet[2474]: I0119 12:08:23.441604 2474 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 19 12:08:24.455863 kubelet[2474]: I0119 12:08:24.455799 2474 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:24.475584 kubelet[2474]: E0119 12:08:24.475484 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:24.670903 kubelet[2474]: E0119 12:08:24.670684 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:25.533037 systemd[1]: Reload requested from client PID 2772 ('systemctl') (unit session-8.scope)... Jan 19 12:08:25.533404 systemd[1]: Reloading... Jan 19 12:08:25.691577 zram_generator::config[2817]: No configuration found. Jan 19 12:08:26.132711 systemd[1]: Reloading finished in 598 ms. Jan 19 12:08:26.183626 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 19 12:08:26.187535 kubelet[2474]: I0119 12:08:26.187279 2474 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 12:08:26.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:26.208679 systemd[1]: kubelet.service: Deactivated successfully. Jan 19 12:08:26.209420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:08:26.209491 systemd[1]: kubelet.service: Consumed 2.573s CPU time, 128.1M memory peak. Jan 19 12:08:26.221876 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 19 12:08:26.222341 kernel: audit: type=1131 audit(1768824506.209:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:26.218604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:08:26.220000 audit: BPF prog-id=113 op=LOAD Jan 19 12:08:26.283449 kernel: audit: type=1334 audit(1768824506.220:396): prog-id=113 op=LOAD Jan 19 12:08:26.283542 kernel: audit: type=1334 audit(1768824506.220:397): prog-id=68 op=UNLOAD Jan 19 12:08:26.220000 audit: BPF prog-id=68 op=UNLOAD Jan 19 12:08:26.220000 audit: BPF prog-id=114 op=LOAD Jan 19 12:08:26.295612 kernel: audit: type=1334 audit(1768824506.220:398): prog-id=114 op=LOAD Jan 19 12:08:26.220000 audit: BPF prog-id=115 op=LOAD Jan 19 12:08:26.308367 kernel: audit: type=1334 audit(1768824506.220:399): prog-id=115 op=LOAD Jan 19 12:08:26.308441 kernel: audit: type=1334 audit(1768824506.220:400): prog-id=69 op=UNLOAD Jan 19 12:08:26.220000 audit: BPF prog-id=69 op=UNLOAD Jan 19 12:08:26.220000 audit: BPF prog-id=70 op=UNLOAD Jan 19 12:08:26.342336 kernel: audit: type=1334 audit(1768824506.220:401): prog-id=70 op=UNLOAD Jan 19 12:08:26.342419 kernel: audit: type=1334 audit(1768824506.222:402): prog-id=116 op=LOAD Jan 19 12:08:26.222000 audit: BPF prog-id=116 op=LOAD Jan 19 12:08:26.354085 kernel: audit: type=1334 audit(1768824506.222:403): prog-id=79 op=UNLOAD Jan 19 12:08:26.222000 audit: BPF prog-id=79 op=UNLOAD Jan 19 12:08:26.226000 audit: BPF prog-id=117 op=LOAD Jan 19 12:08:26.226000 audit: BPF prog-id=74 op=UNLOAD Jan 19 12:08:26.227000 audit: BPF prog-id=118 op=LOAD Jan 19 12:08:26.228000 audit: BPF prog-id=76 op=UNLOAD Jan 19 12:08:26.228000 audit: BPF prog-id=119 op=LOAD Jan 19 12:08:26.228000 audit: BPF prog-id=120 op=LOAD Jan 19 12:08:26.228000 audit: BPF prog-id=77 op=UNLOAD Jan 19 12:08:26.228000 audit: BPF prog-id=78 op=UNLOAD Jan 19 12:08:26.230000 audit: BPF prog-id=121 op=LOAD Jan 19 12:08:26.230000 audit: BPF prog-id=80 op=UNLOAD Jan 19 12:08:26.365316 kernel: audit: type=1334 audit(1768824506.226:404): prog-id=117 op=LOAD Jan 19 12:08:26.230000 audit: BPF prog-id=122 op=LOAD Jan 19 12:08:26.230000 audit: BPF prog-id=123 op=LOAD Jan 19 12:08:26.230000 audit: BPF prog-id=81 op=UNLOAD Jan 19 12:08:26.230000 audit: BPF prog-id=82 op=UNLOAD Jan 19 12:08:26.234000 audit: BPF prog-id=124 op=LOAD Jan 19 12:08:26.234000 audit: BPF prog-id=63 op=UNLOAD Jan 19 12:08:26.234000 audit: BPF prog-id=125 op=LOAD Jan 19 12:08:26.234000 audit: BPF prog-id=126 op=LOAD Jan 19 12:08:26.234000 audit: BPF prog-id=64 op=UNLOAD Jan 19 12:08:26.234000 audit: BPF prog-id=65 op=UNLOAD Jan 19 12:08:26.235000 audit: BPF prog-id=127 op=LOAD 
Jan 19 12:08:26.235000 audit: BPF prog-id=75 op=UNLOAD Jan 19 12:08:26.239000 audit: BPF prog-id=128 op=LOAD Jan 19 12:08:26.239000 audit: BPF prog-id=71 op=UNLOAD Jan 19 12:08:26.239000 audit: BPF prog-id=129 op=LOAD Jan 19 12:08:26.239000 audit: BPF prog-id=130 op=LOAD Jan 19 12:08:26.239000 audit: BPF prog-id=72 op=UNLOAD Jan 19 12:08:26.239000 audit: BPF prog-id=73 op=UNLOAD Jan 19 12:08:26.243000 audit: BPF prog-id=131 op=LOAD Jan 19 12:08:26.243000 audit: BPF prog-id=132 op=LOAD Jan 19 12:08:26.243000 audit: BPF prog-id=66 op=UNLOAD Jan 19 12:08:26.243000 audit: BPF prog-id=67 op=UNLOAD Jan 19 12:08:26.702522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:08:26.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:26.727916 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 12:08:26.925328 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 12:08:26.925328 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:08:26.925328 kubelet[2862]: I0119 12:08:26.924490 2862 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 12:08:26.971276 kubelet[2862]: I0119 12:08:26.970607 2862 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 19 12:08:26.971276 kubelet[2862]: I0119 12:08:26.970725 2862 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 12:08:26.971276 kubelet[2862]: I0119 12:08:26.970753 2862 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 19 12:08:26.971276 kubelet[2862]: I0119 12:08:26.970763 2862 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 19 12:08:26.971276 kubelet[2862]: I0119 12:08:26.971046 2862 server.go:956] "Client rotation is on, will bootstrap in background" Jan 19 12:08:26.974733 kubelet[2862]: I0119 12:08:26.974469 2862 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 19 12:08:26.993535 kubelet[2862]: I0119 12:08:26.993221 2862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 12:08:27.040597 kubelet[2862]: I0119 12:08:27.040530 2862 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 12:08:27.062406 kubelet[2862]: I0119 12:08:27.062366 2862 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 19 12:08:27.063294 kubelet[2862]: I0119 12:08:27.063069 2862 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 12:08:27.063669 kubelet[2862]: I0119 12:08:27.063412 2862 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 12:08:27.063669 kubelet[2862]: I0119 12:08:27.063590 2862 topology_manager.go:138] "Creating topology manager with none policy" Jan 19 12:08:27.063669 kubelet[2862]: I0119 12:08:27.063599 2862 container_manager_linux.go:306] "Creating device plugin manager" Jan 19 12:08:27.063669 kubelet[2862]: I0119 12:08:27.063625 2862 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 19 12:08:27.069664 kubelet[2862]: I0119 12:08:27.068288 2862 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:08:27.071287 kubelet[2862]: I0119 12:08:27.070650 2862 kubelet.go:475] "Attempting to sync node with API server" Jan 19 12:08:27.075537 kubelet[2862]: I0119 12:08:27.071629 2862 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 12:08:27.075537 kubelet[2862]: I0119 12:08:27.071662 2862 kubelet.go:387] "Adding apiserver pod source" Jan 19 12:08:27.075537 kubelet[2862]: I0119 12:08:27.071679 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 12:08:27.083441 kubelet[2862]: I0119 12:08:27.081838 2862 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 12:08:27.090574 kubelet[2862]: I0119 12:08:27.087824 2862 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 19 12:08:27.090664 kubelet[2862]: I0119 12:08:27.090651 2862 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 19 12:08:27.156542 
kubelet[2862]: I0119 12:08:27.156519 2862 server.go:1262] "Started kubelet" Jan 19 12:08:27.159431 kubelet[2862]: I0119 12:08:27.158635 2862 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 12:08:27.161816 kubelet[2862]: I0119 12:08:27.161687 2862 server.go:310] "Adding debug handlers to kubelet server" Jan 19 12:08:27.164294 kubelet[2862]: I0119 12:08:27.163752 2862 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 12:08:27.164294 kubelet[2862]: I0119 12:08:27.163897 2862 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 19 12:08:27.175046 kubelet[2862]: I0119 12:08:27.174497 2862 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 12:08:27.181837 kubelet[2862]: I0119 12:08:27.181618 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 12:08:27.184319 kubelet[2862]: I0119 12:08:27.183737 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 12:08:27.187063 kubelet[2862]: I0119 12:08:27.186550 2862 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 19 12:08:27.187063 kubelet[2862]: I0119 12:08:27.186749 2862 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 19 12:08:27.187063 kubelet[2862]: I0119 12:08:27.186878 2862 reconciler.go:29] "Reconciler: start to sync state" Jan 19 12:08:27.191851 kubelet[2862]: E0119 12:08:27.191610 2862 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 19 12:08:27.197572 kubelet[2862]: I0119 12:08:27.196456 2862 factory.go:223] Registration of the systemd container factory successfully Jan 19 12:08:27.203816 kubelet[2862]: I0119 12:08:27.202839 2862 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 12:08:27.221045 kubelet[2862]: I0119 12:08:27.219283 2862 factory.go:223] Registration of the containerd container factory successfully Jan 19 12:08:27.291891 kubelet[2862]: I0119 12:08:27.291711 2862 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 19 12:08:27.423625 kubelet[2862]: I0119 12:08:27.423382 2862 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 19 12:08:27.423625 kubelet[2862]: I0119 12:08:27.423406 2862 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 19 12:08:27.423625 kubelet[2862]: I0119 12:08:27.423426 2862 kubelet.go:2427] "Starting kubelet main sync loop" Jan 19 12:08:27.423625 kubelet[2862]: E0119 12:08:27.423469 2862 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508636 2862 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508755 2862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508772 2862 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508879 2862 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508887 2862 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508902 2862 policy_none.go:49] "None policy: Start" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508911 2862 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 19 12:08:27.509285 kubelet[2862]: I0119 12:08:27.508920 2862 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 19 12:08:27.509742 kubelet[2862]: I0119 12:08:27.509312 2862 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 19 12:08:27.509742 kubelet[2862]: I0119 12:08:27.509321 2862 policy_none.go:47] "Start" Jan 19 12:08:27.524784 kubelet[2862]: E0119 12:08:27.524349 2862 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 19 12:08:27.534652 kubelet[2862]: E0119 12:08:27.534479 2862 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 19 12:08:27.534713 kubelet[2862]: I0119 12:08:27.534664 2862 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 12:08:27.534713 kubelet[2862]: I0119 12:08:27.534677 2862 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 12:08:27.537404 kubelet[2862]: I0119 12:08:27.536796 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 12:08:27.541836 kubelet[2862]: E0119 12:08:27.540895 2862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 19 12:08:27.708680 kubelet[2862]: I0119 12:08:27.706367 2862 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:08:27.731274 kubelet[2862]: I0119 12:08:27.730492 2862 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:27.732785 kubelet[2862]: I0119 12:08:27.731703 2862 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.737634 kubelet[2862]: I0119 12:08:27.737493 2862 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:08:27.741505 kubelet[2862]: I0119 12:08:27.738887 2862 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 19 12:08:27.741505 kubelet[2862]: I0119 12:08:27.739078 2862 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 19 12:08:27.787404 kubelet[2862]: E0119 12:08:27.786459 2862 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.798496 kubelet[2862]: I0119 12:08:27.798392 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.798496 kubelet[2862]: I0119 12:08:27.798418 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.798496 kubelet[2862]: I0119 12:08:27.798436 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 19 12:08:27.798496 kubelet[2862]: I0119 12:08:27.798451 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:27.798496 kubelet[2862]: I0119 12:08:27.798468 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.798645 kubelet[2862]: I0119 12:08:27.798480 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:27.798645 kubelet[2862]: I0119 12:08:27.798493 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:27.798645 kubelet[2862]: I0119 12:08:27.798509 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c53d9ab9be86bad6a4b13fa1e77def8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c53d9ab9be86bad6a4b13fa1e77def8\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:08:27.798645 kubelet[2862]: I0119 12:08:27.798520 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:08:28.055421 kubelet[2862]: E0119 12:08:28.053076 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:28.072553 kubelet[2862]: E0119 12:08:28.072521 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:28.082518 kubelet[2862]: I0119 12:08:28.082062 2862 apiserver.go:52] "Watching apiserver" Jan 19 12:08:28.087325 kubelet[2862]: E0119 12:08:28.086836 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:28.169401 kubelet[2862]: I0119 12:08:28.168509 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.168489834 podStartE2EDuration="1.168489834s" podCreationTimestamp="2026-01-19 12:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:08:28.14683833 +0000 UTC m=+1.397371081" watchObservedRunningTime="2026-01-19 12:08:28.168489834 +0000 UTC m=+1.419022596" Jan 19 12:08:28.191040 kubelet[2862]: I0119 12:08:28.189315 2862 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 19 12:08:28.200716 kubelet[2862]: I0119 12:08:28.200593 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.200580107 podStartE2EDuration="4.200580107s" podCreationTimestamp="2026-01-19 12:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:08:28.169398048 +0000 UTC m=+1.419930800" watchObservedRunningTime="2026-01-19 12:08:28.200580107 +0000 UTC m=+1.451112860" Jan 19 12:08:28.225059 kubelet[2862]: I0119 12:08:28.224834 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.224819315 podStartE2EDuration="1.224819315s" podCreationTimestamp="2026-01-19 12:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:08:28.201349233 +0000 UTC m=+1.451881985" watchObservedRunningTime="2026-01-19 12:08:28.224819315 +0000 UTC m=+1.475352077" Jan 19 12:08:28.496851 kubelet[2862]: E0119 12:08:28.495364 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:28.504490 kubelet[2862]: E0119 12:08:28.504466 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:28.506339 kubelet[2862]: E0119 12:08:28.504740 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:29.503294 kubelet[2862]: E0119 12:08:29.501682 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:29.506561 kubelet[2862]: E0119 12:08:29.506539 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:30.511265 kubelet[2862]: E0119 12:08:30.511067 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:30.752394 kubelet[2862]: E0119 12:08:30.751746 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:31.355399 kubelet[2862]: I0119 12:08:31.354556 2862 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 19 12:08:31.359449 containerd[1618]: time="2026-01-19T12:08:31.356619587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 19 12:08:31.359883 kubelet[2862]: I0119 12:08:31.357062 2862 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 19 12:08:31.512524 kubelet[2862]: E0119 12:08:31.511673 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:32.291060 systemd[1]: Created slice kubepods-besteffort-pod099e1574_2cb6_49fb_b282_2b7ad4dbe529.slice - libcontainer container kubepods-besteffort-pod099e1574_2cb6_49fb_b282_2b7ad4dbe529.slice. 
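The pod_startup_latency_tracker entries above report podStartE2EDuration values that equal watchObservedRunningTime minus podCreationTimestamp as printed (for example 12:08:28.224819315 minus 12:08:27 gives the 1.224819315s shown for kube-scheduler-localhost). A short Python sketch that re-checks that arithmetic from the timestamp strings; the parsing is an assumption based only on how the fields are formatted in these entries.

from datetime import datetime, timezone

def parse_log_ts(ts):
    # Fields above look like "2026-01-19 12:08:28.224819315 +0000 UTC" (any
    # trailing "m=+..." monotonic offset is ignored). strptime's %f only takes
    # up to microseconds, so the nanosecond fraction is carried separately.
    stamp = ts.split(" +0000 UTC")[0]
    frac = 0.0
    if "." in stamp:
        stamp, digits = stamp.split(".")
        frac = float("0." + digits)
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc), frac

def e2e_seconds(created, observed):
    (t0, f0), (t1, f1) = parse_log_ts(created), parse_log_ts(observed)
    return (t1 - t0).total_seconds() + (f1 - f0)

# Values copied from the kube-scheduler-localhost entry above.
print(e2e_seconds("2026-01-19 12:08:27 +0000 UTC",
                  "2026-01-19 12:08:28.224819315 +0000 UTC"))  # ~1.224819315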
Jan 19 12:08:32.357400 kubelet[2862]: I0119 12:08:32.356520 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/099e1574-2cb6-49fb-b282-2b7ad4dbe529-lib-modules\") pod \"kube-proxy-c8znj\" (UID: \"099e1574-2cb6-49fb-b282-2b7ad4dbe529\") " pod="kube-system/kube-proxy-c8znj" Jan 19 12:08:32.357400 kubelet[2862]: I0119 12:08:32.356567 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzct\" (UniqueName: \"kubernetes.io/projected/099e1574-2cb6-49fb-b282-2b7ad4dbe529-kube-api-access-xwzct\") pod \"kube-proxy-c8znj\" (UID: \"099e1574-2cb6-49fb-b282-2b7ad4dbe529\") " pod="kube-system/kube-proxy-c8znj" Jan 19 12:08:32.357400 kubelet[2862]: I0119 12:08:32.356596 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/099e1574-2cb6-49fb-b282-2b7ad4dbe529-kube-proxy\") pod \"kube-proxy-c8znj\" (UID: \"099e1574-2cb6-49fb-b282-2b7ad4dbe529\") " pod="kube-system/kube-proxy-c8znj" Jan 19 12:08:32.357400 kubelet[2862]: I0119 12:08:32.356614 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/099e1574-2cb6-49fb-b282-2b7ad4dbe529-xtables-lock\") pod \"kube-proxy-c8znj\" (UID: \"099e1574-2cb6-49fb-b282-2b7ad4dbe529\") " pod="kube-system/kube-proxy-c8znj" Jan 19 12:08:32.462588 kubelet[2862]: I0119 12:08:32.462074 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/438a0aa4-9d6e-4230-b772-7a94e1adb5e7-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-fvhml\" (UID: \"438a0aa4-9d6e-4230-b772-7a94e1adb5e7\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fvhml" Jan 19 12:08:32.463761 kubelet[2862]: I0119 12:08:32.463742 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwxx\" (UniqueName: \"kubernetes.io/projected/438a0aa4-9d6e-4230-b772-7a94e1adb5e7-kube-api-access-2mwxx\") pod \"tigera-operator-65cdcdfd6d-fvhml\" (UID: \"438a0aa4-9d6e-4230-b772-7a94e1adb5e7\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fvhml" Jan 19 12:08:32.478501 systemd[1]: Created slice kubepods-besteffort-pod438a0aa4_9d6e_4230_b772_7a94e1adb5e7.slice - libcontainer container kubepods-besteffort-pod438a0aa4_9d6e_4230_b772_7a94e1adb5e7.slice. 
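The slice names in the "Created slice" entries above follow directly from the pod UID and QoS class shown in the volume records: under the systemd cgroup driver the UID's dashes become underscores beneath a "kubepods-besteffort-pod" prefix. A one-line Python sketch of that naming, written from the pattern observed here rather than from the kubelet's own code.

def besteffort_pod_slice(pod_uid):
    # Matches the names in the entries above, e.g. for kube-proxy-c8znj's UID.
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

print(besteffort_pod_slice("099e1574-2cb6-49fb-b282-2b7ad4dbe529"))
# -> kubepods-besteffort-pod099e1574_2cb6_49fb_b282_2b7ad4dbe529.slice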
Jan 19 12:08:32.614864 kubelet[2862]: E0119 12:08:32.614571 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:32.630768 containerd[1618]: time="2026-01-19T12:08:32.629526174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8znj,Uid:099e1574-2cb6-49fb-b282-2b7ad4dbe529,Namespace:kube-system,Attempt:0,}" Jan 19 12:08:32.788531 containerd[1618]: time="2026-01-19T12:08:32.788491260Z" level=info msg="connecting to shim f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca" address="unix:///run/containerd/s/19b950f298f95c4fb343899fcb0ee32c51ef97c64362c0e498bd12a56a2fb9ff" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:32.818695 containerd[1618]: time="2026-01-19T12:08:32.816902063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fvhml,Uid:438a0aa4-9d6e-4230-b772-7a94e1adb5e7,Namespace:tigera-operator,Attempt:0,}" Jan 19 12:08:32.898458 containerd[1618]: time="2026-01-19T12:08:32.897916253Z" level=info msg="connecting to shim 1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636" address="unix:///run/containerd/s/6442e68680a098e41ad1e0cf9c374d57bfd4d2a5bdfc206e2db53e1a39bb86e0" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:32.951461 systemd[1]: Started cri-containerd-f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca.scope - libcontainer container f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca. Jan 19 12:08:33.014864 systemd[1]: Started cri-containerd-1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636.scope - libcontainer container 1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636. 
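Two conventions help when reading the audit records that follow: the number inside "audit(1768824513.023:437)" is a Unix epoch timestamp (seconds.millis) followed by the event serial, and the proctitle= field is the audited command line, hex-encoded with NUL bytes between arguments (the kernel truncates long ones). A small Python sketch of both, using values copied from the records below.

from datetime import datetime, timezone

def audit_stamp(stamp):
    # "1768824513.023:437" -> (wall-clock time in UTC, event serial)
    epoch, serial = stamp.split(":")
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return when.strftime("%b %d %H:%M:%S"), serial

def decode_proctitle(hex_argv):
    # Hex-encoded argv joined by NUL bytes; empty trailing fields are dropped.
    return [a.decode("utf-8", "replace")
            for a in bytes.fromhex(hex_argv).split(b"\x00") if a]

print(audit_stamp("1768824513.023:437"))
# -> ('Jan 19 12:08:33', '437'), matching the journal prefix of that record
print(decode_proctitle(
    "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
# -> ['iptables', '-w', '5', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']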
Jan 19 12:08:33.023000 audit: BPF prog-id=133 op=LOAD Jan 19 12:08:33.043471 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 19 12:08:33.043553 kernel: audit: type=1334 audit(1768824513.023:437): prog-id=133 op=LOAD Jan 19 12:08:33.024000 audit: BPF prog-id=134 op=LOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.099046 kernel: audit: type=1334 audit(1768824513.024:438): prog-id=134 op=LOAD Jan 19 12:08:33.099353 kernel: audit: type=1300 audit(1768824513.024:438): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.099389 kernel: audit: type=1327 audit(1768824513.024:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.141502 kernel: audit: type=1334 audit(1768824513.024:439): prog-id=134 op=UNLOAD Jan 19 12:08:33.024000 audit: BPF prog-id=134 op=UNLOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.169624 containerd[1618]: time="2026-01-19T12:08:33.169530501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8znj,Uid:099e1574-2cb6-49fb-b282-2b7ad4dbe529,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca\"" Jan 19 12:08:33.172779 kubelet[2862]: E0119 12:08:33.172755 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:33.203549 kernel: audit: type=1300 audit(1768824513.024:439): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.210422 containerd[1618]: time="2026-01-19T12:08:33.208674686Z" level=info msg="CreateContainer within sandbox 
\"f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 19 12:08:33.250555 kernel: audit: type=1327 audit(1768824513.024:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.255640 containerd[1618]: time="2026-01-19T12:08:33.255599794Z" level=info msg="Container 7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:08:33.024000 audit: BPF prog-id=135 op=LOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.325295 containerd[1618]: time="2026-01-19T12:08:33.311807106Z" level=info msg="CreateContainer within sandbox \"f1199e7610521b853b3e34a1a8b3ba07ffe8e5430b348752d089f4a9573cefca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db\"" Jan 19 12:08:33.330290 kernel: audit: type=1334 audit(1768824513.024:440): prog-id=135 op=LOAD Jan 19 12:08:33.330344 kernel: audit: type=1300 audit(1768824513.024:440): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.330365 containerd[1618]: time="2026-01-19T12:08:33.328424521Z" level=info msg="StartContainer for \"7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db\"" Jan 19 12:08:33.330868 containerd[1618]: time="2026-01-19T12:08:33.330715195Z" level=info msg="connecting to shim 7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db" address="unix:///run/containerd/s/19b950f298f95c4fb343899fcb0ee32c51ef97c64362c0e498bd12a56a2fb9ff" protocol=ttrpc version=3 Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.402483 kernel: audit: type=1327 audit(1768824513.024:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.024000 audit: BPF prog-id=136 op=LOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.024000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.024000 audit: BPF prog-id=136 op=UNLOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.024000 audit: BPF prog-id=135 op=UNLOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.024000 audit: BPF prog-id=137 op=LOAD Jan 19 12:08:33.024000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2932 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313939653736313035323162383533623365333461316138623362 Jan 19 12:08:33.078000 audit: BPF prog-id=138 op=LOAD Jan 19 12:08:33.086000 audit: BPF prog-id=139 op=LOAD Jan 19 12:08:33.086000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.086000 audit: BPF prog-id=139 op=UNLOAD Jan 19 12:08:33.086000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.086000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.087000 audit: BPF prog-id=140 op=LOAD Jan 19 12:08:33.087000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.087000 audit: BPF prog-id=141 op=LOAD Jan 19 12:08:33.087000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.087000 audit: BPF prog-id=141 op=UNLOAD Jan 19 12:08:33.087000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.087000 audit: BPF prog-id=140 op=UNLOAD Jan 19 12:08:33.087000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.088000 audit: BPF prog-id=142 op=LOAD Jan 19 12:08:33.088000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2951 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.088000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134343862336532393737613632636338633562393130326137326535 Jan 19 12:08:33.460566 systemd[1]: Started cri-containerd-7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db.scope - libcontainer container 7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db. Jan 19 12:08:33.484745 containerd[1618]: time="2026-01-19T12:08:33.484712826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fvhml,Uid:438a0aa4-9d6e-4230-b772-7a94e1adb5e7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636\"" Jan 19 12:08:33.499083 containerd[1618]: time="2026-01-19T12:08:33.498694330Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 19 12:08:33.589000 audit: BPF prog-id=143 op=LOAD Jan 19 12:08:33.589000 audit[3008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2932 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653436386365646330656639363063353339353338656433366662 Jan 19 12:08:33.589000 audit: BPF prog-id=144 op=LOAD Jan 19 12:08:33.589000 audit[3008]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2932 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653436386365646330656639363063353339353338656433366662 Jan 19 12:08:33.589000 audit: BPF prog-id=144 op=UNLOAD Jan 19 12:08:33.589000 audit[3008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653436386365646330656639363063353339353338656433366662 Jan 19 12:08:33.589000 audit: BPF prog-id=143 op=UNLOAD Jan 19 12:08:33.589000 audit[3008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2932 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.589000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653436386365646330656639363063353339353338656433366662 Jan 19 12:08:33.589000 audit: BPF prog-id=145 op=LOAD Jan 19 12:08:33.589000 audit[3008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2932 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:33.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653436386365646330656639363063353339353338656433366662 Jan 19 12:08:33.623469 kubelet[2862]: E0119 12:08:33.619594 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:33.706610 containerd[1618]: time="2026-01-19T12:08:33.706579885Z" level=info msg="StartContainer for \"7fe468cedc0ef960c539538ed36fb82a3aca8f7df1bd0ac6cfe969e9fc8c98db\" returns successfully" Jan 19 12:08:34.363000 audit[3083]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.363000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeae989250 a2=0 a3=7ffeae98923c items=0 ppid=3029 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 12:08:34.364000 audit[3084]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:34.364000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff330b81d0 a2=0 a3=7fff330b81bc items=0 ppid=3029 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 12:08:34.393000 audit[3087]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:34.393000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53a7f740 a2=0 a3=7ffe53a7f72c items=0 ppid=3029 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 12:08:34.395000 audit[3089]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 
12:08:34.395000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd91ffa60 a2=0 a3=7ffdd91ffa4c items=0 ppid=3029 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 12:08:34.403000 audit[3091]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:34.403000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfd62b370 a2=0 a3=7ffdfd62b35c items=0 ppid=3029 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 12:08:34.408000 audit[3092]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.408000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8f134a70 a2=0 a3=7ffe8f134a5c items=0 ppid=3029 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 12:08:34.477000 audit[3093]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.477000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe85736b50 a2=0 a3=7ffe85736b3c items=0 ppid=3029 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 12:08:34.506000 audit[3095]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.506000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7a257a90 a2=0 a3=7fff7a257a7c items=0 ppid=3029 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 19 12:08:34.537000 audit[3098]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.537000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 
a0=3 a1=7ffe9fa76d10 a2=0 a3=7ffe9fa76cfc items=0 ppid=3029 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 19 12:08:34.548000 audit[3099]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.548000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb61c950 a2=0 a3=7ffcdb61c93c items=0 ppid=3029 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 12:08:34.570000 audit[3101]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.570000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca42163f0 a2=0 a3=7ffca42163dc items=0 ppid=3029 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 12:08:34.571905 kubelet[2862]: E0119 12:08:34.571613 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:34.573305 kubelet[2862]: E0119 12:08:34.571824 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:34.592000 audit[3102]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.592000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4f73c790 a2=0 a3=7ffd4f73c77c items=0 ppid=3029 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 12:08:34.619000 audit[3104]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.619000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff50d0f5e0 a2=0 a3=7fff50d0f5cc items=0 ppid=3029 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:34.621662 kubelet[2862]: I0119 12:08:34.620643 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8znj" podStartSLOduration=2.620630309 podStartE2EDuration="2.620630309s" podCreationTimestamp="2026-01-19 12:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:08:34.611363785 +0000 UTC m=+7.861896536" watchObservedRunningTime="2026-01-19 12:08:34.620630309 +0000 UTC m=+7.871163061" Jan 19 12:08:34.668000 audit[3107]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.668000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdea993580 a2=0 a3=7ffdea99356c items=0 ppid=3029 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:34.679000 audit[3108]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.679000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd03b596b0 a2=0 a3=7ffd03b5969c items=0 ppid=3029 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 12:08:34.705000 audit[3110]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.705000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1dc864f0 a2=0 a3=7ffc1dc864dc items=0 ppid=3029 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 12:08:34.713000 audit[3111]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.713000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe546c5cf0 a2=0 a3=7ffe546c5cdc items=0 ppid=3029 pid=3111 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 12:08:34.727000 audit[3113]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.727000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3dfd9990 a2=0 a3=7ffc3dfd997c items=0 ppid=3029 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 19 12:08:34.756000 audit[3116]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.756000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7d4f1790 a2=0 a3=7ffe7d4f177c items=0 ppid=3029 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 19 12:08:34.783000 audit[3119]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.783000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6f403740 a2=0 a3=7fff6f40372c items=0 ppid=3029 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 19 12:08:34.795000 audit[3120]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.795000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff810d7af0 a2=0 a3=7fff810d7adc items=0 ppid=3029 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 12:08:34.823000 audit[3122]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3122 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.823000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffeb3c9ac0 a2=0 a3=7fffeb3c9aac items=0 ppid=3029 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:34.860000 audit[3125]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.860000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeddf0e450 a2=0 a3=7ffeddf0e43c items=0 ppid=3029 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:34.868000 audit[3126]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.868000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1781d090 a2=0 a3=7fff1781d07c items=0 ppid=3029 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 12:08:34.882000 audit[3128]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:08:34.882000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdc78316d0 a2=0 a3=7ffdc78316bc items=0 ppid=3029 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 12:08:34.989000 audit[3134]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:34.989000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdaeae4120 a2=0 a3=7ffdaeae410c items=0 ppid=3029 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:34.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:35.009000 audit[3134]: NETFILTER_CFG 
table=nat:80 family=2 entries=14 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:35.009000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdaeae4120 a2=0 a3=7ffdaeae410c items=0 ppid=3029 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:35.025000 audit[3139]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.025000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe082cebb0 a2=0 a3=7ffe082ceb9c items=0 ppid=3029 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 12:08:35.052000 audit[3141]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.052000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff91e68f40 a2=0 a3=7fff91e68f2c items=0 ppid=3029 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 19 12:08:35.076000 audit[3144]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.076000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffef53b56e0 a2=0 a3=7ffef53b56cc items=0 ppid=3029 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 19 12:08:35.083000 audit[3145]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.083000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e229be0 a2=0 a3=7fff8e229bcc items=0 ppid=3029 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.083000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 12:08:35.097000 audit[3147]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.097000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe4279dc0 a2=0 a3=7fffe4279dac items=0 ppid=3029 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 12:08:35.104000 audit[3148]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.104000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd614286d0 a2=0 a3=7ffd614286bc items=0 ppid=3029 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 12:08:35.146000 audit[3150]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.146000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe6ec77380 a2=0 a3=7ffe6ec7736c items=0 ppid=3029 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:35.170000 audit[3153]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.170000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeb2017010 a2=0 a3=7ffeb2016ffc items=0 ppid=3029 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:35.179000 audit[3154]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.179000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff44950000 a2=0 a3=7fff4494ffec items=0 ppid=3029 pid=3154 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 12:08:35.199000 audit[3157]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.199000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff084ab780 a2=0 a3=7fff084ab76c items=0 ppid=3029 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 12:08:35.206000 audit[3161]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.206000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe31576030 a2=0 a3=7ffe3157601c items=0 ppid=3029 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 12:08:35.228000 audit[3163]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.228000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd06182ed0 a2=0 a3=7ffd06182ebc items=0 ppid=3029 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 19 12:08:35.253000 audit[3166]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.253000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd330ee770 a2=0 a3=7ffd330ee75c items=0 ppid=3029 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 19 12:08:35.286000 audit[3169]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 19 12:08:35.286000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef16e4f70 a2=0 a3=7ffef16e4f5c items=0 ppid=3029 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 19 12:08:35.301000 audit[3170]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.301000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa7716890 a2=0 a3=7fffa771687c items=0 ppid=3029 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 12:08:35.328000 audit[3172]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.328000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffef23ecf0 a2=0 a3=7fffef23ecdc items=0 ppid=3029 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:35.327521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906277770.mount: Deactivated successfully. 
Jan 19 12:08:35.355000 audit[3175]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.355000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa0d6c1b0 a2=0 a3=7fffa0d6c19c items=0 ppid=3029 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:08:35.366000 audit[3176]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.366000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd074fa2e0 a2=0 a3=7ffd074fa2cc items=0 ppid=3029 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 12:08:35.389000 audit[3178]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.389000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff9201fff0 a2=0 a3=7fff9201ffdc items=0 ppid=3029 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 12:08:35.396000 audit[3179]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.396000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe48b62120 a2=0 a3=7ffe48b6210c items=0 ppid=3029 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 12:08:35.418000 audit[3181]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.418000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4f648420 a2=0 a3=7ffe4f64840c items=0 ppid=3029 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 
12:08:35.449000 audit[3184]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:08:35.449000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc42bd35d0 a2=0 a3=7ffc42bd35bc items=0 ppid=3029 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:08:35.474000 audit[3186]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 12:08:35.474000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff265d8980 a2=0 a3=7fff265d896c items=0 ppid=3029 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.474000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:35.475000 audit[3186]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 12:08:35.475000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff265d8980 a2=0 a3=7fff265d896c items=0 ppid=3029 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:35.475000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:35.594312 kubelet[2862]: E0119 12:08:35.593692 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:39.938399 containerd[1618]: time="2026-01-19T12:08:39.937686196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:39.942459 containerd[1618]: time="2026-01-19T12:08:39.941578289Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558945" Jan 19 12:08:39.945559 containerd[1618]: time="2026-01-19T12:08:39.945529299Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:39.952932 containerd[1618]: time="2026-01-19T12:08:39.952907824Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:08:39.955543 containerd[1618]: time="2026-01-19T12:08:39.953913108Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.453065067s" Jan 19 12:08:39.955543 containerd[1618]: time="2026-01-19T12:08:39.953944136Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 19 12:08:39.982694 containerd[1618]: time="2026-01-19T12:08:39.982661813Z" level=info msg="CreateContainer within sandbox \"1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 19 12:08:40.021483 containerd[1618]: time="2026-01-19T12:08:40.021438078Z" level=info msg="Container a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:08:40.043603 containerd[1618]: time="2026-01-19T12:08:40.043333846Z" level=info msg="CreateContainer within sandbox \"1448b3e2977a62cc8c5b9102a72e5e856b58e89520e890a049b8c876af123636\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88\"" Jan 19 12:08:40.046883 containerd[1618]: time="2026-01-19T12:08:40.045754819Z" level=info msg="StartContainer for \"a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88\"" Jan 19 12:08:40.046883 containerd[1618]: time="2026-01-19T12:08:40.046828318Z" level=info msg="connecting to shim a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88" address="unix:///run/containerd/s/6442e68680a098e41ad1e0cf9c374d57bfd4d2a5bdfc206e2db53e1a39bb86e0" protocol=ttrpc version=3 Jan 19 12:08:40.118474 systemd[1]: Started cri-containerd-a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88.scope - libcontainer container a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88. 
Jan 19 12:08:40.178000 audit: BPF prog-id=146 op=LOAD Jan 19 12:08:40.189698 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 19 12:08:40.189806 kernel: audit: type=1334 audit(1768824520.178:509): prog-id=146 op=LOAD Jan 19 12:08:40.205508 kernel: audit: type=1334 audit(1768824520.180:510): prog-id=147 op=LOAD Jan 19 12:08:40.180000 audit: BPF prog-id=147 op=LOAD Jan 19 12:08:40.180000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.269659 kernel: audit: type=1300 audit(1768824520.180:510): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.323284 kernel: audit: type=1327 audit(1768824520.180:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.323368 kernel: audit: type=1334 audit(1768824520.180:511): prog-id=147 op=UNLOAD Jan 19 12:08:40.180000 audit: BPF prog-id=147 op=UNLOAD Jan 19 12:08:40.180000 audit[3192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.386792 kernel: audit: type=1300 audit(1768824520.180:511): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.437885 kernel: audit: type=1327 audit(1768824520.180:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: BPF prog-id=148 op=LOAD Jan 19 12:08:40.181000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.506813 
kernel: audit: type=1334 audit(1768824520.181:512): prog-id=148 op=LOAD Jan 19 12:08:40.507426 kernel: audit: type=1300 audit(1768824520.181:512): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.507462 kernel: audit: type=1327 audit(1768824520.181:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: BPF prog-id=149 op=LOAD Jan 19 12:08:40.181000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: BPF prog-id=149 op=UNLOAD Jan 19 12:08:40.181000 audit[3192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: BPF prog-id=148 op=UNLOAD Jan 19 12:08:40.181000 audit[3192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.181000 audit: BPF prog-id=150 op=LOAD Jan 19 12:08:40.181000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2951 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:40.181000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623635643734646365326266396265643064373865353266656461 Jan 19 12:08:40.567915 containerd[1618]: time="2026-01-19T12:08:40.567560908Z" level=info msg="StartContainer for \"a4b65d74dce2bf9bed0d78e52fedaf5a635bd5d1e268b9fcee251a5113356b88\" returns successfully" Jan 19 12:08:48.206000 audit[1826]: USER_END pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:08:48.206800 sudo[1826]: pam_unix(sudo:session): session closed for user root Jan 19 12:08:48.246332 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 19 12:08:48.246420 kernel: audit: type=1106 audit(1768824528.206:517): pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:08:48.256600 sshd[1825]: Connection closed by 10.0.0.1 port 58770 Jan 19 12:08:48.257759 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 19 12:08:48.206000 audit[1826]: CRED_DISP pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:08:48.273965 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:58770.service: Deactivated successfully. Jan 19 12:08:48.284830 systemd[1]: session-8.scope: Deactivated successfully. Jan 19 12:08:48.285581 systemd[1]: session-8.scope: Consumed 10.268s CPU time, 223.6M memory peak. Jan 19 12:08:48.287741 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Jan 19 12:08:48.300756 kernel: audit: type=1104 audit(1768824528.206:518): pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:08:48.300800 kernel: audit: type=1106 audit(1768824528.261:519): pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:08:48.261000 audit[1821]: USER_END pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:08:48.292773 systemd-logind[1598]: Removed session 8. 
Jan 19 12:08:48.406787 kernel: audit: type=1104 audit(1768824528.261:520): pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:08:48.261000 audit[1821]: CRED_DISP pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:08:48.412337 kernel: audit: type=1131 audit(1768824528.276:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.55:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:48.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.55:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:08:49.614000 audit[3284]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:49.614000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd9c16110 a2=0 a3=7ffcd9c160fc items=0 ppid=3029 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:49.723576 kernel: audit: type=1325 audit(1768824529.614:522): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:49.723701 kernel: audit: type=1300 audit(1768824529.614:522): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd9c16110 a2=0 a3=7ffcd9c160fc items=0 ppid=3029 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:49.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:49.754640 kernel: audit: type=1327 audit(1768824529.614:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:49.753000 audit[3284]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:49.829697 kernel: audit: type=1325 audit(1768824529.753:523): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:49.753000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd9c16110 a2=0 a3=0 items=0 ppid=3029 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:49.895418 kernel: audit: type=1300 audit(1768824529.753:523): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd9c16110 a2=0 a3=0 items=0 ppid=3029 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:49.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:50.897000 audit[3286]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:50.897000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd18c54b70 a2=0 a3=7ffd18c54b5c items=0 ppid=3029 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:50.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:50.911000 audit[3286]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:50.911000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd18c54b70 a2=0 a3=0 items=0 ppid=3029 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:50.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:55.410000 audit[3288]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:55.421815 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 19 12:08:55.421896 kernel: audit: type=1325 audit(1768824535.410:526): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:55.452672 kernel: audit: type=1300 audit(1768824535.410:526): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6b9e4410 a2=0 a3=7ffc6b9e43fc items=0 ppid=3029 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:55.410000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6b9e4410 a2=0 a3=7ffc6b9e43fc items=0 ppid=3029 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:55.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:55.551729 kernel: audit: type=1327 audit(1768824535.410:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:55.553679 kernel: audit: type=1325 audit(1768824535.518:527): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:55.518000 audit[3288]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:55.585733 kernel: audit: type=1300 audit(1768824535.518:527): arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffc6b9e4410 a2=0 a3=0 items=0 ppid=3029 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:55.518000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6b9e4410 a2=0 a3=0 items=0 ppid=3029 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:55.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:55.683518 kernel: audit: type=1327 audit(1768824535.518:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:56.719000 audit[3290]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:56.759423 kernel: audit: type=1325 audit(1768824536.719:528): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:56.719000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc22689940 a2=0 a3=7ffc2268992c items=0 ppid=3029 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:56.865589 kernel: audit: type=1300 audit(1768824536.719:528): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc22689940 a2=0 a3=7ffc2268992c items=0 ppid=3029 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:56.865708 kernel: audit: type=1327 audit(1768824536.719:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:56.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:56.767000 audit[3290]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:56.895444 kernel: audit: type=1325 audit(1768824536.767:529): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:56.767000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc22689940 a2=0 a3=0 items=0 ppid=3029 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:56.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:58.898458 kubelet[2862]: I0119 12:08:58.895498 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-fvhml" podStartSLOduration=20.434547749 podStartE2EDuration="26.895464631s" podCreationTimestamp="2026-01-19 12:08:32 +0000 UTC" firstStartedPulling="2026-01-19 
12:08:33.496677936 +0000 UTC m=+6.747210688" lastFinishedPulling="2026-01-19 12:08:39.957594808 +0000 UTC m=+13.208127570" observedRunningTime="2026-01-19 12:08:40.705528738 +0000 UTC m=+13.956061490" watchObservedRunningTime="2026-01-19 12:08:58.895464631 +0000 UTC m=+32.145997382" Jan 19 12:08:58.943398 systemd[1]: Created slice kubepods-besteffort-pod5cfaa640_a599_4717_9819_ba721b51c103.slice - libcontainer container kubepods-besteffort-pod5cfaa640_a599_4717_9819_ba721b51c103.slice. Jan 19 12:08:58.972768 kubelet[2862]: I0119 12:08:58.971852 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cfaa640-a599-4717-9819-ba721b51c103-tigera-ca-bundle\") pod \"calico-typha-6dc949b746-5ghcj\" (UID: \"5cfaa640-a599-4717-9819-ba721b51c103\") " pod="calico-system/calico-typha-6dc949b746-5ghcj" Jan 19 12:08:58.972768 kubelet[2862]: I0119 12:08:58.971883 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5cfaa640-a599-4717-9819-ba721b51c103-typha-certs\") pod \"calico-typha-6dc949b746-5ghcj\" (UID: \"5cfaa640-a599-4717-9819-ba721b51c103\") " pod="calico-system/calico-typha-6dc949b746-5ghcj" Jan 19 12:08:58.972768 kubelet[2862]: I0119 12:08:58.971897 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lltzf\" (UniqueName: \"kubernetes.io/projected/5cfaa640-a599-4717-9819-ba721b51c103-kube-api-access-lltzf\") pod \"calico-typha-6dc949b746-5ghcj\" (UID: \"5cfaa640-a599-4717-9819-ba721b51c103\") " pod="calico-system/calico-typha-6dc949b746-5ghcj" Jan 19 12:08:59.005000 audit[3292]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:59.005000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc62c33930 a2=0 a3=7ffc62c3391c items=0 ppid=3029 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:59.012000 audit[3292]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:08:59.012000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc62c33930 a2=0 a3=0 items=0 ppid=3029 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:08:59.178842 systemd[1]: Created slice kubepods-besteffort-pod39bd7f74_187e_4404_8c1b_6b592840cbb8.slice - libcontainer container kubepods-besteffort-pod39bd7f74_187e_4404_8c1b_6b592840cbb8.slice. 
Jan 19 12:08:59.263813 kubelet[2862]: E0119 12:08:59.263686 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:59.270746 containerd[1618]: time="2026-01-19T12:08:59.270439590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc949b746-5ghcj,Uid:5cfaa640-a599-4717-9819-ba721b51c103,Namespace:calico-system,Attempt:0,}" Jan 19 12:08:59.276564 kubelet[2862]: I0119 12:08:59.275845 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-lib-modules\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276564 kubelet[2862]: I0119 12:08:59.275876 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-cni-bin-dir\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276564 kubelet[2862]: I0119 12:08:59.275891 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-var-lib-calico\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276564 kubelet[2862]: I0119 12:08:59.275904 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-cni-net-dir\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276564 kubelet[2862]: I0119 12:08:59.275921 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bd7f74-187e-4404-8c1b-6b592840cbb8-tigera-ca-bundle\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276785 kubelet[2862]: I0119 12:08:59.275935 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-var-run-calico\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276785 kubelet[2862]: I0119 12:08:59.275949 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-flexvol-driver-host\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276785 kubelet[2862]: I0119 12:08:59.275964 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-cni-log-dir\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " 
pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276785 kubelet[2862]: I0119 12:08:59.275977 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39bd7f74-187e-4404-8c1b-6b592840cbb8-node-certs\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276785 kubelet[2862]: I0119 12:08:59.275989 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-policysync\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276938 kubelet[2862]: I0119 12:08:59.276000 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39bd7f74-187e-4404-8c1b-6b592840cbb8-xtables-lock\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.276938 kubelet[2862]: I0119 12:08:59.276013 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2hm\" (UniqueName: \"kubernetes.io/projected/39bd7f74-187e-4404-8c1b-6b592840cbb8-kube-api-access-bp2hm\") pod \"calico-node-nh77m\" (UID: \"39bd7f74-187e-4404-8c1b-6b592840cbb8\") " pod="calico-system/calico-node-nh77m" Jan 19 12:08:59.382909 kubelet[2862]: E0119 12:08:59.382639 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:08:59.382909 kubelet[2862]: E0119 12:08:59.382866 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.382909 kubelet[2862]: W0119 12:08:59.382883 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.382909 kubelet[2862]: E0119 12:08:59.382904 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.391786 kubelet[2862]: E0119 12:08:59.391378 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.391786 kubelet[2862]: W0119 12:08:59.391397 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.391786 kubelet[2862]: E0119 12:08:59.391412 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.400965 kubelet[2862]: E0119 12:08:59.399407 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.400965 kubelet[2862]: W0119 12:08:59.399426 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.400965 kubelet[2862]: E0119 12:08:59.399445 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.408500 kubelet[2862]: E0119 12:08:59.405829 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.408500 kubelet[2862]: W0119 12:08:59.405845 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.408500 kubelet[2862]: E0119 12:08:59.405857 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.408801 kubelet[2862]: E0119 12:08:59.408788 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.408852 kubelet[2862]: W0119 12:08:59.408841 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.408895 kubelet[2862]: E0119 12:08:59.408885 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.433724 kubelet[2862]: E0119 12:08:59.432665 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.433846 kubelet[2862]: W0119 12:08:59.433827 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.433908 kubelet[2862]: E0119 12:08:59.433896 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.438335 kubelet[2862]: E0119 12:08:59.435688 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.438335 kubelet[2862]: W0119 12:08:59.435705 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.438335 kubelet[2862]: E0119 12:08:59.435725 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.440776 kubelet[2862]: E0119 12:08:59.440626 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.440833 kubelet[2862]: W0119 12:08:59.440777 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.440833 kubelet[2862]: E0119 12:08:59.440795 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.443818 kubelet[2862]: E0119 12:08:59.442823 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.443818 kubelet[2862]: W0119 12:08:59.442969 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.443818 kubelet[2862]: E0119 12:08:59.442982 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.445964 kubelet[2862]: E0119 12:08:59.445803 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.445964 kubelet[2862]: W0119 12:08:59.445942 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.445964 kubelet[2862]: E0119 12:08:59.445955 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.450531 kubelet[2862]: E0119 12:08:59.447532 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.450531 kubelet[2862]: W0119 12:08:59.447676 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.450531 kubelet[2862]: E0119 12:08:59.447693 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.450531 kubelet[2862]: E0119 12:08:59.450496 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.450531 kubelet[2862]: W0119 12:08:59.450505 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.450531 kubelet[2862]: E0119 12:08:59.450515 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.455354 containerd[1618]: time="2026-01-19T12:08:59.452544707Z" level=info msg="connecting to shim 0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1" address="unix:///run/containerd/s/a111dd03326980a4cf8176c6a64ce86c8e83a4d47bdb82c604e329a7e53221dd" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:59.455441 kubelet[2862]: E0119 12:08:59.453567 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.455441 kubelet[2862]: W0119 12:08:59.453580 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.455441 kubelet[2862]: E0119 12:08:59.453594 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.461016 kubelet[2862]: E0119 12:08:59.457389 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.461016 kubelet[2862]: W0119 12:08:59.457537 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.461016 kubelet[2862]: E0119 12:08:59.457559 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.461016 kubelet[2862]: E0119 12:08:59.458459 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.461016 kubelet[2862]: W0119 12:08:59.458468 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.461016 kubelet[2862]: E0119 12:08:59.458478 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.465662 kubelet[2862]: E0119 12:08:59.463718 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.465662 kubelet[2862]: W0119 12:08:59.463861 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.465662 kubelet[2862]: E0119 12:08:59.463873 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.465662 kubelet[2862]: E0119 12:08:59.464475 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.465662 kubelet[2862]: W0119 12:08:59.464484 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.465662 kubelet[2862]: E0119 12:08:59.464493 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.466950 kubelet[2862]: E0119 12:08:59.466485 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.466950 kubelet[2862]: W0119 12:08:59.466622 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.466950 kubelet[2862]: E0119 12:08:59.466634 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.468946 kubelet[2862]: E0119 12:08:59.467890 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.468946 kubelet[2862]: W0119 12:08:59.468060 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.472893 kubelet[2862]: E0119 12:08:59.471951 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.482640 kubelet[2862]: E0119 12:08:59.482595 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.482640 kubelet[2862]: W0119 12:08:59.482617 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.482640 kubelet[2862]: E0119 12:08:59.482636 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.483865 kubelet[2862]: E0119 12:08:59.483687 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.483865 kubelet[2862]: W0119 12:08:59.483704 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.483865 kubelet[2862]: E0119 12:08:59.483720 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.485702 kubelet[2862]: E0119 12:08:59.484781 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.485702 kubelet[2862]: W0119 12:08:59.484927 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.485702 kubelet[2862]: E0119 12:08:59.484946 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.495784 kubelet[2862]: E0119 12:08:59.495549 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.495784 kubelet[2862]: W0119 12:08:59.495779 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.495864 kubelet[2862]: E0119 12:08:59.495801 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.496533 kubelet[2862]: E0119 12:08:59.496390 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.496533 kubelet[2862]: W0119 12:08:59.496531 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.503528 kubelet[2862]: E0119 12:08:59.502520 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.506479 kubelet[2862]: E0119 12:08:59.505846 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.506479 kubelet[2862]: W0119 12:08:59.505858 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.506479 kubelet[2862]: E0119 12:08:59.505870 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.506586 kubelet[2862]: E0119 12:08:59.506497 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.506586 kubelet[2862]: W0119 12:08:59.506506 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.506586 kubelet[2862]: E0119 12:08:59.506517 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.510606 kubelet[2862]: E0119 12:08:59.510044 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.512640 kubelet[2862]: W0119 12:08:59.512054 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.512640 kubelet[2862]: E0119 12:08:59.512356 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.515751 kubelet[2862]: E0119 12:08:59.515735 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.515751 kubelet[2862]: W0119 12:08:59.515747 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.515877 kubelet[2862]: E0119 12:08:59.515757 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.517483 kubelet[2862]: E0119 12:08:59.516849 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.517483 kubelet[2862]: W0119 12:08:59.516860 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.517483 kubelet[2862]: E0119 12:08:59.516870 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.522814 kubelet[2862]: E0119 12:08:59.522437 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.522814 kubelet[2862]: W0119 12:08:59.522570 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.522814 kubelet[2862]: E0119 12:08:59.522580 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.523564 kubelet[2862]: E0119 12:08:59.523493 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.524579 kubelet[2862]: W0119 12:08:59.524560 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.524975 kubelet[2862]: E0119 12:08:59.524924 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.526713 kubelet[2862]: E0119 12:08:59.526666 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.526713 kubelet[2862]: W0119 12:08:59.526682 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.526713 kubelet[2862]: E0119 12:08:59.526697 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.528593 kubelet[2862]: E0119 12:08:59.527916 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.528593 kubelet[2862]: W0119 12:08:59.527930 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.528593 kubelet[2862]: E0119 12:08:59.527942 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.534513 kubelet[2862]: E0119 12:08:59.534000 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.536489 kubelet[2862]: W0119 12:08:59.536451 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.536489 kubelet[2862]: E0119 12:08:59.536474 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.548566 kubelet[2862]: E0119 12:08:59.548544 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.548665 kubelet[2862]: W0119 12:08:59.548653 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.548732 kubelet[2862]: E0119 12:08:59.548719 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.551511 kubelet[2862]: E0119 12:08:59.551497 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.551581 kubelet[2862]: W0119 12:08:59.551570 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.551627 kubelet[2862]: E0119 12:08:59.551618 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.551909 kubelet[2862]: E0119 12:08:59.551899 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.551966 kubelet[2862]: W0119 12:08:59.551957 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.552018 kubelet[2862]: E0119 12:08:59.552008 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.552566 kubelet[2862]: E0119 12:08:59.552554 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.552625 kubelet[2862]: W0119 12:08:59.552615 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.552670 kubelet[2862]: E0119 12:08:59.552661 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.554068 kubelet[2862]: E0119 12:08:59.554054 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.554415 kubelet[2862]: W0119 12:08:59.554402 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.555350 kubelet[2862]: E0119 12:08:59.555069 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.555883 kubelet[2862]: E0119 12:08:59.555870 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.555936 kubelet[2862]: W0119 12:08:59.555926 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.555987 kubelet[2862]: E0119 12:08:59.555978 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.558753 kubelet[2862]: E0119 12:08:59.557528 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.558753 kubelet[2862]: W0119 12:08:59.557542 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.558753 kubelet[2862]: E0119 12:08:59.557552 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.560762 kubelet[2862]: E0119 12:08:59.560747 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.560828 kubelet[2862]: W0119 12:08:59.560815 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.560881 kubelet[2862]: E0119 12:08:59.560871 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.567929 kubelet[2862]: E0119 12:08:59.565587 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.567929 kubelet[2862]: W0119 12:08:59.565743 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.567929 kubelet[2862]: E0119 12:08:59.565767 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.567929 kubelet[2862]: E0119 12:08:59.567488 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.567929 kubelet[2862]: W0119 12:08:59.567498 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.567929 kubelet[2862]: E0119 12:08:59.567511 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.568814 kubelet[2862]: E0119 12:08:59.568602 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.568814 kubelet[2862]: W0119 12:08:59.568756 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.568814 kubelet[2862]: E0119 12:08:59.568768 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.569836 kubelet[2862]: E0119 12:08:59.569650 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.569836 kubelet[2862]: W0119 12:08:59.569803 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.569836 kubelet[2862]: E0119 12:08:59.569814 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.571560 kubelet[2862]: E0119 12:08:59.571338 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.571560 kubelet[2862]: W0119 12:08:59.571471 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.571560 kubelet[2862]: E0119 12:08:59.571483 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.572725 kubelet[2862]: E0119 12:08:59.572463 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.572725 kubelet[2862]: W0119 12:08:59.572596 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.572725 kubelet[2862]: E0119 12:08:59.572608 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.573722 kubelet[2862]: E0119 12:08:59.572970 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.573722 kubelet[2862]: W0119 12:08:59.573615 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.573722 kubelet[2862]: E0119 12:08:59.573628 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.575943 kubelet[2862]: E0119 12:08:59.575768 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.575943 kubelet[2862]: W0119 12:08:59.575898 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.575943 kubelet[2862]: E0119 12:08:59.575909 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.575943 kubelet[2862]: I0119 12:08:59.575935 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5-registration-dir\") pod \"csi-node-driver-sh4c8\" (UID: \"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5\") " pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:08:59.579439 kubelet[2862]: E0119 12:08:59.578638 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.579439 kubelet[2862]: W0119 12:08:59.578652 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.579439 kubelet[2862]: E0119 12:08:59.578663 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.579439 kubelet[2862]: I0119 12:08:59.578973 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5-kubelet-dir\") pod \"csi-node-driver-sh4c8\" (UID: \"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5\") " pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:08:59.579439 kubelet[2862]: E0119 12:08:59.578995 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.579439 kubelet[2862]: W0119 12:08:59.579004 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.579439 kubelet[2862]: E0119 12:08:59.579014 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.579968 kubelet[2862]: E0119 12:08:59.579696 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.579968 kubelet[2862]: W0119 12:08:59.579834 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.579968 kubelet[2862]: E0119 12:08:59.579844 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.580604 kubelet[2862]: E0119 12:08:59.580045 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.580604 kubelet[2862]: W0119 12:08:59.580060 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.580604 kubelet[2862]: E0119 12:08:59.580357 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.580604 kubelet[2862]: E0119 12:08:59.580560 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.580604 kubelet[2862]: W0119 12:08:59.580570 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.580604 kubelet[2862]: E0119 12:08:59.580579 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.580972 kubelet[2862]: E0119 12:08:59.580809 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.580972 kubelet[2862]: W0119 12:08:59.580950 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.580972 kubelet[2862]: E0119 12:08:59.580962 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.581713 kubelet[2862]: E0119 12:08:59.581557 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.581713 kubelet[2862]: W0119 12:08:59.581704 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.581764 kubelet[2862]: E0119 12:08:59.581716 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.582746 kubelet[2862]: E0119 12:08:59.582593 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.582746 kubelet[2862]: W0119 12:08:59.582726 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.582746 kubelet[2862]: E0119 12:08:59.582740 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.584661 kubelet[2862]: E0119 12:08:59.584021 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.584661 kubelet[2862]: W0119 12:08:59.584549 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.584661 kubelet[2862]: E0119 12:08:59.584564 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.584955 kubelet[2862]: E0119 12:08:59.584801 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.584955 kubelet[2862]: W0119 12:08:59.584943 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.585003 kubelet[2862]: E0119 12:08:59.584962 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.586040 kubelet[2862]: E0119 12:08:59.585904 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.586346 kubelet[2862]: W0119 12:08:59.586046 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.586346 kubelet[2862]: E0119 12:08:59.586061 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.587918 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.589453 kubelet[2862]: W0119 12:08:59.587932 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.587941 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.588735 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.589453 kubelet[2862]: W0119 12:08:59.588745 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.588756 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.588982 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.589453 kubelet[2862]: W0119 12:08:59.588992 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.589453 kubelet[2862]: E0119 12:08:59.589003 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.591583 kubelet[2862]: E0119 12:08:59.591057 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.591828 kubelet[2862]: W0119 12:08:59.591669 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.591828 kubelet[2862]: E0119 12:08:59.591823 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.670923 systemd[1]: Started cri-containerd-0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1.scope - libcontainer container 0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1. Jan 19 12:08:59.684874 kubelet[2862]: E0119 12:08:59.684615 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.684874 kubelet[2862]: W0119 12:08:59.684635 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.684874 kubelet[2862]: E0119 12:08:59.684652 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.686018 kubelet[2862]: I0119 12:08:59.685551 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5ff\" (UniqueName: \"kubernetes.io/projected/fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5-kube-api-access-2z5ff\") pod \"csi-node-driver-sh4c8\" (UID: \"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5\") " pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:08:59.689711 kubelet[2862]: E0119 12:08:59.689022 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.690392 kubelet[2862]: W0119 12:08:59.689812 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.690392 kubelet[2862]: E0119 12:08:59.689830 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.693609 kubelet[2862]: E0119 12:08:59.692934 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.693800 kubelet[2862]: W0119 12:08:59.693657 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.694488 kubelet[2862]: E0119 12:08:59.694048 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.698509 kubelet[2862]: E0119 12:08:59.696646 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.698509 kubelet[2862]: W0119 12:08:59.696659 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.698509 kubelet[2862]: E0119 12:08:59.696669 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.698509 kubelet[2862]: I0119 12:08:59.698065 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5-varrun\") pod \"csi-node-driver-sh4c8\" (UID: \"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5\") " pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:08:59.698609 kubelet[2862]: E0119 12:08:59.698528 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.698609 kubelet[2862]: W0119 12:08:59.698536 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.698609 kubelet[2862]: E0119 12:08:59.698545 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.701381 kubelet[2862]: E0119 12:08:59.700774 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.701381 kubelet[2862]: W0119 12:08:59.700906 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.701381 kubelet[2862]: E0119 12:08:59.700920 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.703654 kubelet[2862]: E0119 12:08:59.702996 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.703654 kubelet[2862]: W0119 12:08:59.703476 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.703654 kubelet[2862]: E0119 12:08:59.703487 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.705604 kubelet[2862]: E0119 12:08:59.705466 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.705604 kubelet[2862]: W0119 12:08:59.705594 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.705604 kubelet[2862]: E0119 12:08:59.705605 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.708370 kubelet[2862]: E0119 12:08:59.706510 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.708370 kubelet[2862]: W0119 12:08:59.706523 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.708370 kubelet[2862]: E0119 12:08:59.706532 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.708906 kubelet[2862]: E0119 12:08:59.708624 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.708906 kubelet[2862]: W0119 12:08:59.708754 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.708906 kubelet[2862]: E0119 12:08:59.708763 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.710872 kubelet[2862]: E0119 12:08:59.710685 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.710872 kubelet[2862]: W0119 12:08:59.710822 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.710872 kubelet[2862]: E0119 12:08:59.710835 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.716691 kubelet[2862]: E0119 12:08:59.712989 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.716691 kubelet[2862]: W0119 12:08:59.713541 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.716691 kubelet[2862]: E0119 12:08:59.713564 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.716691 kubelet[2862]: E0119 12:08:59.716521 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.716691 kubelet[2862]: W0119 12:08:59.716531 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.716691 kubelet[2862]: E0119 12:08:59.716543 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.716691 kubelet[2862]: I0119 12:08:59.716696 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5-socket-dir\") pod \"csi-node-driver-sh4c8\" (UID: \"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5\") " pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:08:59.718628 kubelet[2862]: E0119 12:08:59.718351 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.718628 kubelet[2862]: W0119 12:08:59.718488 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.718628 kubelet[2862]: E0119 12:08:59.718499 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.718744 kubelet[2862]: E0119 12:08:59.718681 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.718744 kubelet[2862]: W0119 12:08:59.718688 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.718744 kubelet[2862]: E0119 12:08:59.718696 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.719611 kubelet[2862]: E0119 12:08:59.718843 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.719611 kubelet[2862]: W0119 12:08:59.718977 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.719611 kubelet[2862]: E0119 12:08:59.718985 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.719611 kubelet[2862]: E0119 12:08:59.719553 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.719611 kubelet[2862]: W0119 12:08:59.719561 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.719611 kubelet[2862]: E0119 12:08:59.719569 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.719909 kubelet[2862]: E0119 12:08:59.719747 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.719909 kubelet[2862]: W0119 12:08:59.719885 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.719909 kubelet[2862]: E0119 12:08:59.719895 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.720728 kubelet[2862]: E0119 12:08:59.720438 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.720728 kubelet[2862]: W0119 12:08:59.720568 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.720728 kubelet[2862]: E0119 12:08:59.720578 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.722000 audit: BPF prog-id=151 op=LOAD Jan 19 12:08:59.724000 audit: BPF prog-id=152 op=LOAD Jan 19 12:08:59.724000 audit[3363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.724000 audit: BPF prog-id=152 op=UNLOAD Jan 19 12:08:59.724000 audit[3363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.724000 audit: BPF prog-id=153 op=LOAD Jan 19 12:08:59.724000 audit[3363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.724000 audit: BPF prog-id=154 op=LOAD Jan 19 12:08:59.724000 audit[3363]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.725000 audit: BPF prog-id=154 op=UNLOAD Jan 19 12:08:59.725000 audit[3363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.725000 audit: BPF prog-id=153 op=UNLOAD Jan 19 
12:08:59.725000 audit[3363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.727000 audit: BPF prog-id=155 op=LOAD Jan 19 12:08:59.727000 audit[3363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3311 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:08:59.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064366263626561316666636134633966326634643632306662666261 Jan 19 12:08:59.801668 kubelet[2862]: E0119 12:08:59.800432 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:59.806601 containerd[1618]: time="2026-01-19T12:08:59.806561115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nh77m,Uid:39bd7f74-187e-4404-8c1b-6b592840cbb8,Namespace:calico-system,Attempt:0,}" Jan 19 12:08:59.821535 kubelet[2862]: E0119 12:08:59.820802 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.821535 kubelet[2862]: W0119 12:08:59.820825 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.821535 kubelet[2862]: E0119 12:08:59.820844 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.824498 kubelet[2862]: E0119 12:08:59.823806 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.824498 kubelet[2862]: W0119 12:08:59.823875 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.824498 kubelet[2862]: E0119 12:08:59.823888 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.832901 kubelet[2862]: E0119 12:08:59.832793 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.832901 kubelet[2862]: W0119 12:08:59.832809 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.832901 kubelet[2862]: E0119 12:08:59.832822 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.834795 kubelet[2862]: E0119 12:08:59.833520 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.834795 kubelet[2862]: W0119 12:08:59.834034 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.834795 kubelet[2862]: E0119 12:08:59.834054 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.836317 kubelet[2862]: E0119 12:08:59.835700 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.836317 kubelet[2862]: W0119 12:08:59.835844 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.836317 kubelet[2862]: E0119 12:08:59.835857 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.840452 kubelet[2862]: E0119 12:08:59.839739 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.840452 kubelet[2862]: W0119 12:08:59.839886 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.840452 kubelet[2862]: E0119 12:08:59.839899 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.843589 kubelet[2862]: E0119 12:08:59.841545 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.843589 kubelet[2862]: W0119 12:08:59.841685 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.843589 kubelet[2862]: E0119 12:08:59.841698 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.845940 kubelet[2862]: E0119 12:08:59.845048 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.847576 kubelet[2862]: W0119 12:08:59.846733 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.847576 kubelet[2862]: E0119 12:08:59.846912 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.849753 kubelet[2862]: E0119 12:08:59.849507 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.849753 kubelet[2862]: W0119 12:08:59.849521 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.849753 kubelet[2862]: E0119 12:08:59.849531 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.851574 kubelet[2862]: E0119 12:08:59.850794 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.851574 kubelet[2862]: W0119 12:08:59.850939 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.851574 kubelet[2862]: E0119 12:08:59.850949 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.856761 kubelet[2862]: E0119 12:08:59.856659 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.856761 kubelet[2862]: W0119 12:08:59.856670 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.856761 kubelet[2862]: E0119 12:08:59.856680 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.859481 kubelet[2862]: E0119 12:08:59.858745 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.859481 kubelet[2862]: W0119 12:08:59.858900 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.859481 kubelet[2862]: E0119 12:08:59.858912 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.861663 kubelet[2862]: E0119 12:08:59.861407 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.861663 kubelet[2862]: W0119 12:08:59.861547 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.861663 kubelet[2862]: E0119 12:08:59.861557 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.865333 kubelet[2862]: E0119 12:08:59.863012 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.865333 kubelet[2862]: W0119 12:08:59.864618 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.865333 kubelet[2862]: E0119 12:08:59.864634 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.867424 kubelet[2862]: E0119 12:08:59.866860 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.867424 kubelet[2862]: W0119 12:08:59.867002 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.867424 kubelet[2862]: E0119 12:08:59.867013 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:08:59.911559 kubelet[2862]: E0119 12:08:59.910728 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:08:59.911559 kubelet[2862]: W0119 12:08:59.910748 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:08:59.911559 kubelet[2862]: E0119 12:08:59.910765 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:08:59.912507 containerd[1618]: time="2026-01-19T12:08:59.911028405Z" level=info msg="connecting to shim 1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4" address="unix:///run/containerd/s/3447015fc15e0140b51f1e925277d7c818a57ec9fc8391dff98ab4ec53c751a5" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:08:59.913865 containerd[1618]: time="2026-01-19T12:08:59.913837006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc949b746-5ghcj,Uid:5cfaa640-a599-4717-9819-ba721b51c103,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1\"" Jan 19 12:08:59.916744 kubelet[2862]: E0119 12:08:59.916727 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:08:59.919047 containerd[1618]: time="2026-01-19T12:08:59.919020756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 19 12:09:00.067562 systemd[1]: Started cri-containerd-1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4.scope - libcontainer container 1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4. Jan 19 12:09:00.120000 audit[3480]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:09:00.122000 audit: BPF prog-id=156 op=LOAD Jan 19 12:09:00.120000 audit[3480]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc258ffbf0 a2=0 a3=7ffc258ffbdc items=0 ppid=3029 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:09:00.128000 audit[3480]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:09:00.128000 audit[3480]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc258ffbf0 a2=0 a3=0 items=0 ppid=3029 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.129000 audit: BPF prog-id=157 op=LOAD Jan 19 12:09:00.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:09:00.129000 audit[3467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.129000 audit: BPF prog-id=157 op=UNLOAD Jan 19 12:09:00.129000 audit[3467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3467 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.129000 audit: BPF prog-id=158 op=LOAD Jan 19 12:09:00.129000 audit[3467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.131000 audit: BPF prog-id=159 op=LOAD Jan 19 12:09:00.131000 audit[3467]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.131000 audit: BPF prog-id=159 op=UNLOAD Jan 19 12:09:00.131000 audit[3467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.131000 audit: BPF prog-id=158 op=UNLOAD Jan 19 12:09:00.131000 audit[3467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.131000 audit: BPF prog-id=160 op=LOAD Jan 19 12:09:00.131000 audit[3467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3454 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:00.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373138636534663334643836643962636263623065323465656534 Jan 19 12:09:00.318878 containerd[1618]: time="2026-01-19T12:09:00.318385274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nh77m,Uid:39bd7f74-187e-4404-8c1b-6b592840cbb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\"" Jan 19 12:09:00.322878 kubelet[2862]: E0119 12:09:00.321972 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:01.259764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888475138.mount: Deactivated successfully. Jan 19 12:09:01.429611 kubelet[2862]: E0119 12:09:01.428061 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:03.434691 kubelet[2862]: E0119 12:09:03.434655 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:04.845600 containerd[1618]: time="2026-01-19T12:09:04.844968029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:04.849419 containerd[1618]: time="2026-01-19T12:09:04.849390686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 19 12:09:04.853734 containerd[1618]: time="2026-01-19T12:09:04.853703001Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:04.857794 containerd[1618]: time="2026-01-19T12:09:04.857770992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:04.858472 containerd[1618]: time="2026-01-19T12:09:04.858449862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.939012597s" Jan 19 12:09:04.858542 containerd[1618]: time="2026-01-19T12:09:04.858529150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 19 12:09:04.863413 containerd[1618]: time="2026-01-19T12:09:04.861565135Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 19 12:09:04.925535 containerd[1618]: time="2026-01-19T12:09:04.924647984Z" level=info msg="CreateContainer within sandbox \"0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 19 12:09:04.954448 containerd[1618]: time="2026-01-19T12:09:04.953595251Z" level=info msg="Container 098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:09:04.982511 containerd[1618]: time="2026-01-19T12:09:04.981823930Z" level=info msg="CreateContainer within sandbox \"0d6bcbea1ffca4c9f2f4d620fbfbab05057db2e0185d459ef39bac9fea821ea1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397\"" Jan 19 12:09:04.987461 containerd[1618]: time="2026-01-19T12:09:04.986668456Z" level=info msg="StartContainer for \"098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397\"" Jan 19 12:09:04.989940 containerd[1618]: time="2026-01-19T12:09:04.988415902Z" level=info msg="connecting to shim 098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397" address="unix:///run/containerd/s/a111dd03326980a4cf8176c6a64ce86c8e83a4d47bdb82c604e329a7e53221dd" protocol=ttrpc version=3 Jan 19 12:09:05.150368 systemd[1]: Started cri-containerd-098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397.scope - libcontainer container 098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397. Jan 19 12:09:05.245415 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 19 12:09:05.245512 kernel: audit: type=1334 audit(1768824545.228:550): prog-id=161 op=LOAD Jan 19 12:09:05.228000 audit: BPF prog-id=161 op=LOAD Jan 19 12:09:05.255000 audit: BPF prog-id=162 op=LOAD Jan 19 12:09:05.275760 kernel: audit: type=1334 audit(1768824545.255:551): prog-id=162 op=LOAD Jan 19 12:09:05.255000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.332554 kernel: audit: type=1300 audit(1768824545.255:551): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.383622 kernel: audit: type=1327 audit(1768824545.255:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.383745 kernel: audit: type=1334 audit(1768824545.255:552): prog-id=162 op=UNLOAD Jan 19 12:09:05.255000 audit: BPF prog-id=162 op=UNLOAD Jan 19 12:09:05.255000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 
pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.429780 kubelet[2862]: E0119 12:09:05.428986 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:05.450773 kernel: audit: type=1300 audit(1768824545.255:552): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.256000 audit: BPF prog-id=163 op=LOAD Jan 19 12:09:05.524061 kernel: audit: type=1327 audit(1768824545.255:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.524450 kernel: audit: type=1334 audit(1768824545.256:553): prog-id=163 op=LOAD Jan 19 12:09:05.525889 kernel: audit: type=1300 audit(1768824545.256:553): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.651929 containerd[1618]: time="2026-01-19T12:09:05.599715220Z" level=info msg="StartContainer for \"098f930d50ba5c0301010366a564c9f0d30f61f881c811bd6c64596730b0d397\" returns successfully" Jan 19 12:09:05.652819 kernel: audit: type=1327 audit(1768824545.256:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.256000 audit: BPF prog-id=164 op=LOAD Jan 19 12:09:05.256000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.256000 audit: BPF prog-id=164 op=UNLOAD Jan 19 12:09:05.256000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.256000 audit: BPF prog-id=163 op=UNLOAD Jan 19 12:09:05.256000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.256000 audit: BPF prog-id=165 op=LOAD Jan 19 12:09:05.256000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3311 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039386639333064353062613563303330313031303336366135363463 Jan 19 12:09:05.939641 kubelet[2862]: E0119 12:09:05.939521 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:05.991684 kubelet[2862]: E0119 12:09:05.990701 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:05.991684 kubelet[2862]: W0119 12:09:05.990863 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:05.991684 kubelet[2862]: E0119 12:09:05.991070 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:05.992909 kubelet[2862]: E0119 12:09:05.992766 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:05.992909 kubelet[2862]: W0119 12:09:05.992784 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:05.992909 kubelet[2862]: E0119 12:09:05.992798 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:05.996649 kubelet[2862]: E0119 12:09:05.995917 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:05.996649 kubelet[2862]: W0119 12:09:05.996065 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:05.996649 kubelet[2862]: E0119 12:09:05.996082 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.005798 kubelet[2862]: E0119 12:09:06.005514 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.005798 kubelet[2862]: W0119 12:09:06.005665 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.005798 kubelet[2862]: E0119 12:09:06.005681 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.011762 kubelet[2862]: E0119 12:09:06.011545 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.011762 kubelet[2862]: W0119 12:09:06.011561 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.011762 kubelet[2862]: E0119 12:09:06.011572 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.011762 kubelet[2862]: E0119 12:09:06.011753 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.011762 kubelet[2862]: W0119 12:09:06.011761 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.011762 kubelet[2862]: E0119 12:09:06.011769 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.021558 kubelet[2862]: E0119 12:09:06.018991 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.021558 kubelet[2862]: W0119 12:09:06.019668 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.021558 kubelet[2862]: E0119 12:09:06.019707 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.023561 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.027829 kubelet[2862]: W0119 12:09:06.023577 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.023591 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.025684 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.027829 kubelet[2862]: W0119 12:09:06.025694 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.025704 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.027592 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.027829 kubelet[2862]: W0119 12:09:06.027603 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.027829 kubelet[2862]: E0119 12:09:06.027614 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029081 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.032423 kubelet[2862]: W0119 12:09:06.029504 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029517 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029704 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.032423 kubelet[2862]: W0119 12:09:06.029711 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029719 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029905 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.032423 kubelet[2862]: W0119 12:09:06.029912 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.032423 kubelet[2862]: E0119 12:09:06.029920 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.038860 kubelet[2862]: E0119 12:09:06.036859 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.038860 kubelet[2862]: W0119 12:09:06.036875 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.038860 kubelet[2862]: E0119 12:09:06.036885 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.038860 kubelet[2862]: E0119 12:09:06.037050 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.038860 kubelet[2862]: W0119 12:09:06.037059 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.038860 kubelet[2862]: E0119 12:09:06.037068 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.051779 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.053968 kubelet[2862]: W0119 12:09:06.051797 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.051810 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.052570 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.053968 kubelet[2862]: W0119 12:09:06.052580 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.052590 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.052764 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.053968 kubelet[2862]: W0119 12:09:06.052772 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.053968 kubelet[2862]: E0119 12:09:06.052779 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.053995 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.059835 kubelet[2862]: W0119 12:09:06.054005 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.054016 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.054500 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.059835 kubelet[2862]: W0119 12:09:06.054508 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.054516 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.054716 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.059835 kubelet[2862]: W0119 12:09:06.054723 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.054730 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.059835 kubelet[2862]: E0119 12:09:06.059759 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.060039 kubelet[2862]: W0119 12:09:06.059769 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.060039 kubelet[2862]: E0119 12:09:06.059778 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.061976 kubelet[2862]: E0119 12:09:06.061823 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.061976 kubelet[2862]: W0119 12:09:06.061960 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.061976 kubelet[2862]: E0119 12:09:06.061970 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.066397 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.071682 kubelet[2862]: W0119 12:09:06.066413 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.066424 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.068614 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.071682 kubelet[2862]: W0119 12:09:06.068624 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.068633 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.069768 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.071682 kubelet[2862]: W0119 12:09:06.069778 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.071682 kubelet[2862]: E0119 12:09:06.069786 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.076642 kubelet[2862]: E0119 12:09:06.075952 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.077873 kubelet[2862]: W0119 12:09:06.077580 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.077873 kubelet[2862]: E0119 12:09:06.077730 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.081427 kubelet[2862]: E0119 12:09:06.078561 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.081427 kubelet[2862]: W0119 12:09:06.078711 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.081427 kubelet[2862]: E0119 12:09:06.078724 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.085751 kubelet[2862]: E0119 12:09:06.084940 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.086788 kubelet[2862]: W0119 12:09:06.086484 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.086788 kubelet[2862]: E0119 12:09:06.086635 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.089745 kubelet[2862]: E0119 12:09:06.088987 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.091710 kubelet[2862]: W0119 12:09:06.090673 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.091710 kubelet[2862]: E0119 12:09:06.090693 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.096481 kubelet[2862]: E0119 12:09:06.095602 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.096481 kubelet[2862]: W0119 12:09:06.095746 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.096481 kubelet[2862]: E0119 12:09:06.095757 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:09:06.097795 kubelet[2862]: E0119 12:09:06.097444 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.097795 kubelet[2862]: W0119 12:09:06.097458 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.097795 kubelet[2862]: E0119 12:09:06.097467 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.098736 kubelet[2862]: E0119 12:09:06.097956 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:09:06.098736 kubelet[2862]: W0119 12:09:06.098483 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:09:06.098736 kubelet[2862]: E0119 12:09:06.098495 2862 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:09:06.386614 containerd[1618]: time="2026-01-19T12:09:06.385465239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:06.393629 containerd[1618]: time="2026-01-19T12:09:06.392391855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 19 12:09:06.398865 containerd[1618]: time="2026-01-19T12:09:06.396533680Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:06.404824 containerd[1618]: time="2026-01-19T12:09:06.404797185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:06.406608 containerd[1618]: time="2026-01-19T12:09:06.406433099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.544836205s" Jan 19 12:09:06.406608 containerd[1618]: time="2026-01-19T12:09:06.406597446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 19 12:09:06.430785 containerd[1618]: time="2026-01-19T12:09:06.428567900Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 19 12:09:06.484396 containerd[1618]: time="2026-01-19T12:09:06.483734555Z" level=info msg="Container 6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c: 
CDI devices from CRI Config.CDIDevices: []" Jan 19 12:09:06.516007 containerd[1618]: time="2026-01-19T12:09:06.515969491Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c\"" Jan 19 12:09:06.524502 containerd[1618]: time="2026-01-19T12:09:06.522876341Z" level=info msg="StartContainer for \"6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c\"" Jan 19 12:09:06.533799 containerd[1618]: time="2026-01-19T12:09:06.533639047Z" level=info msg="connecting to shim 6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c" address="unix:///run/containerd/s/3447015fc15e0140b51f1e925277d7c818a57ec9fc8391dff98ab4ec53c751a5" protocol=ttrpc version=3 Jan 19 12:09:06.682500 systemd[1]: Started cri-containerd-6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c.scope - libcontainer container 6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c. Jan 19 12:09:06.796000 audit: BPF prog-id=166 op=LOAD Jan 19 12:09:06.796000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3454 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:06.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663165366636303534643765643838323765336639663266353230 Jan 19 12:09:06.798000 audit: BPF prog-id=167 op=LOAD Jan 19 12:09:06.798000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3454 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:06.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663165366636303534643765643838323765336639663266353230 Jan 19 12:09:06.798000 audit: BPF prog-id=167 op=UNLOAD Jan 19 12:09:06.798000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:06.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663165366636303534643765643838323765336639663266353230 Jan 19 12:09:06.798000 audit: BPF prog-id=166 op=UNLOAD Jan 19 12:09:06.798000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:06.798000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663165366636303534643765643838323765336639663266353230 Jan 19 12:09:06.798000 audit: BPF prog-id=168 op=LOAD Jan 19 12:09:06.798000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3454 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:06.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663165366636303534643765643838323765336639663266353230 Jan 19 12:09:06.888998 containerd[1618]: time="2026-01-19T12:09:06.888688711Z" level=info msg="StartContainer for \"6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c\" returns successfully" Jan 19 12:09:06.930706 systemd[1]: cri-containerd-6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c.scope: Deactivated successfully. Jan 19 12:09:06.934000 audit: BPF prog-id=168 op=UNLOAD Jan 19 12:09:06.952510 kubelet[2862]: E0119 12:09:06.952029 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:06.962925 containerd[1618]: time="2026-01-19T12:09:06.962789890Z" level=info msg="received container exit event container_id:\"6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c\" id:\"6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c\" pid:3598 exited_at:{seconds:1768824546 nanos:952987539}" Jan 19 12:09:06.976567 kubelet[2862]: E0119 12:09:06.973803 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:06.996717 kubelet[2862]: I0119 12:09:06.995028 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dc949b746-5ghcj" podStartSLOduration=4.053168428 podStartE2EDuration="8.995010453s" podCreationTimestamp="2026-01-19 12:08:58 +0000 UTC" firstStartedPulling="2026-01-19 12:08:59.918775178 +0000 UTC m=+33.169307930" lastFinishedPulling="2026-01-19 12:09:04.860617204 +0000 UTC m=+38.111149955" observedRunningTime="2026-01-19 12:09:05.986968417 +0000 UTC m=+39.237501170" watchObservedRunningTime="2026-01-19 12:09:06.995010453 +0000 UTC m=+40.245543205" Jan 19 12:09:07.115000 audit[3636]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:09:07.115000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe48b45ba0 a2=0 a3=7ffe48b45b8c items=0 ppid=3029 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:07.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:09:07.122000 audit[3636]: NETFILTER_CFG table=nat:118 family=2 entries=19 
op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:09:07.122000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe48b45ba0 a2=0 a3=7ffe48b45b8c items=0 ppid=3029 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:07.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:09:07.156969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cf1e6f6054d7ed8827e3f9f2f5209b4110725cb4b513634009b01845763ac6c-rootfs.mount: Deactivated successfully. Jan 19 12:09:07.428494 kubelet[2862]: E0119 12:09:07.427722 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:07.973696 kubelet[2862]: E0119 12:09:07.973664 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:07.978590 kubelet[2862]: E0119 12:09:07.974428 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:07.983907 containerd[1618]: time="2026-01-19T12:09:07.982676345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 19 12:09:08.994501 kubelet[2862]: E0119 12:09:08.993798 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:09.430891 kubelet[2862]: E0119 12:09:09.428870 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:11.432611 kubelet[2862]: E0119 12:09:11.429718 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:13.444986 kubelet[2862]: E0119 12:09:13.443054 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:15.426607 kubelet[2862]: E0119 12:09:15.424770 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 
12:09:17.350663 containerd[1618]: time="2026-01-19T12:09:17.350002690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:17.359555 containerd[1618]: time="2026-01-19T12:09:17.356929684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 19 12:09:17.361666 containerd[1618]: time="2026-01-19T12:09:17.361640587Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:17.367936 containerd[1618]: time="2026-01-19T12:09:17.367912326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:17.369906 containerd[1618]: time="2026-01-19T12:09:17.369875031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 9.387020463s" Jan 19 12:09:17.369990 containerd[1618]: time="2026-01-19T12:09:17.369974707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 19 12:09:17.394892 containerd[1618]: time="2026-01-19T12:09:17.394003627Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 19 12:09:17.433636 containerd[1618]: time="2026-01-19T12:09:17.432798940Z" level=info msg="Container be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:09:17.442676 kubelet[2862]: E0119 12:09:17.442634 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:17.479631 containerd[1618]: time="2026-01-19T12:09:17.479009161Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92\"" Jan 19 12:09:17.482026 containerd[1618]: time="2026-01-19T12:09:17.482005380Z" level=info msg="StartContainer for \"be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92\"" Jan 19 12:09:17.486928 containerd[1618]: time="2026-01-19T12:09:17.486905345Z" level=info msg="connecting to shim be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92" address="unix:///run/containerd/s/3447015fc15e0140b51f1e925277d7c818a57ec9fc8391dff98ab4ec53c751a5" protocol=ttrpc version=3 Jan 19 12:09:17.578842 systemd[1]: Started cri-containerd-be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92.scope - libcontainer container be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92. 
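[Editor's note] The PullImage / CreateContainer / "connecting to shim" / StartContainer sequence just above (first for flexvol-driver, now for install-cni) is the CRI plugin driving containerd. Roughly the same flow can be reproduced directly against this node's containerd socket with the Go client; the sketch below is illustrative only, assuming the classic github.com/containerd/containerd client module, with the image reference, namespace, and socket path taken from the log and every identifier such as "install-cni-demo" made up:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Kubernetes-managed containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Pull the same image the kubelet requested above.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.4", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Create a container; creating its task is what launches the runc shim
        // (the "connecting to shim ... protocol=ttrpc" message), and Start runs
        // the container process inside the cri-containerd-<id>.scope unit.
        container, err := client.NewContainer(ctx, "install-cni-demo",
            containerd.WithNewSnapshot("install-cni-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }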
Jan 19 12:09:17.764624 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 19 12:09:17.764735 kernel: audit: type=1334 audit(1768824557.750:566): prog-id=169 op=LOAD Jan 19 12:09:17.750000 audit: BPF prog-id=169 op=LOAD Jan 19 12:09:17.750000 audit[3646]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.828672 kernel: audit: type=1300 audit(1768824557.750:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.750000 audit: BPF prog-id=170 op=LOAD Jan 19 12:09:17.893967 kernel: audit: type=1327 audit(1768824557.750:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.894030 kernel: audit: type=1334 audit(1768824557.750:567): prog-id=170 op=LOAD Jan 19 12:09:17.750000 audit[3646]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.996982 kernel: audit: type=1300 audit(1768824557.750:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.997076 kernel: audit: type=1327 audit(1768824557.750:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.998063 kernel: audit: type=1334 audit(1768824557.750:568): prog-id=170 op=UNLOAD Jan 19 12:09:17.750000 audit: BPF prog-id=170 op=UNLOAD Jan 19 12:09:17.750000 audit[3646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:18.064687 kernel: audit: type=1300 
audit(1768824557.750:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:18.115886 kernel: audit: type=1327 audit(1768824557.750:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:18.116027 containerd[1618]: time="2026-01-19T12:09:18.083742637Z" level=info msg="StartContainer for \"be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92\" returns successfully" Jan 19 12:09:17.750000 audit: BPF prog-id=169 op=UNLOAD Jan 19 12:09:17.750000 audit[3646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:17.750000 audit: BPF prog-id=171 op=LOAD Jan 19 12:09:17.750000 audit[3646]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3454 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265323764656533623133326639616562303461643131646233623337 Jan 19 12:09:18.130588 kernel: audit: type=1334 audit(1768824557.750:569): prog-id=169 op=UNLOAD Jan 19 12:09:19.093769 kubelet[2862]: E0119 12:09:19.093732 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:19.442676 kubelet[2862]: E0119 12:09:19.441831 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:20.224025 systemd[1]: cri-containerd-be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92.scope: Deactivated successfully. 
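[Editor's note] Looking back at the burst of driver-call errors at 12:09:06: the kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by running "<driver> init" and parsing the JSON it prints. The nodeagent~uds/uds binary is not on disk yet (Calico's flexvol-driver container, started above at 12:09:06, is what normally installs it), so the call produces no output and decoding the empty string fails with exactly "unexpected end of JSON input". A small sketch of that failure mode, assuming only the documented FlexVolume init contract; the struct is illustrative, not the kubelet's own type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative shape of a FlexVolume driver response; a working driver's
    // `init` call is expected to print JSON along the lines of
    //   {"status": "Success", "capabilities": {"attach": false}}
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds binary is missing, so the "output" the kubelet captured is empty.
        output := ""

        var st driverStatus
        if err := json.Unmarshal([]byte(output), &st); err != nil {
            // Prints: unmarshal failed: unexpected end of JSON input
            fmt.Println("unmarshal failed:", err)
        }
    }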
Jan 19 12:09:20.225049 systemd[1]: cri-containerd-be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92.scope: Consumed 2.327s CPU time, 182.4M memory peak, 4.2M read from disk, 171.3M written to disk. Jan 19 12:09:20.235000 audit: BPF prog-id=171 op=UNLOAD Jan 19 12:09:20.249010 containerd[1618]: time="2026-01-19T12:09:20.247888474Z" level=info msg="received container exit event container_id:\"be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92\" id:\"be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92\" pid:3659 exited_at:{seconds:1768824560 nanos:244895795}" Jan 19 12:09:20.443764 kubelet[2862]: I0119 12:09:20.443737 2862 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 19 12:09:20.549662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be27dee3b132f9aeb04ad11db3b37d23b8c63c4ad34aa0034d1463c933e78f92-rootfs.mount: Deactivated successfully. Jan 19 12:09:20.682071 systemd[1]: Created slice kubepods-burstable-poda49f9c26_4d74_485a_bf88_290c9f9a5235.slice - libcontainer container kubepods-burstable-poda49f9c26_4d74_485a_bf88_290c9f9a5235.slice. Jan 19 12:09:20.778894 kubelet[2862]: I0119 12:09:20.777949 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vnr\" (UniqueName: \"kubernetes.io/projected/81e1841f-312c-44d7-b340-4b8d02b8d37b-kube-api-access-l7vnr\") pod \"coredns-66bc5c9577-5scwx\" (UID: \"81e1841f-312c-44d7-b340-4b8d02b8d37b\") " pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:20.778894 kubelet[2862]: I0119 12:09:20.777982 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c862d411-4d4f-4a97-b967-49e1eb15851d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-9qp6h\" (UID: \"c862d411-4d4f-4a97-b967-49e1eb15851d\") " pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:20.778894 kubelet[2862]: I0119 12:09:20.777998 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ppv\" (UniqueName: \"kubernetes.io/projected/a49f9c26-4d74-485a-bf88-290c9f9a5235-kube-api-access-t9ppv\") pod \"coredns-66bc5c9577-qcffj\" (UID: \"a49f9c26-4d74-485a-bf88-290c9f9a5235\") " pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:20.778894 kubelet[2862]: I0119 12:09:20.778010 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e1841f-312c-44d7-b340-4b8d02b8d37b-config-volume\") pod \"coredns-66bc5c9577-5scwx\" (UID: \"81e1841f-312c-44d7-b340-4b8d02b8d37b\") " pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:20.778894 kubelet[2862]: I0119 12:09:20.778024 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c862d411-4d4f-4a97-b967-49e1eb15851d-goldmane-key-pair\") pod \"goldmane-7c778bb748-9qp6h\" (UID: \"c862d411-4d4f-4a97-b967-49e1eb15851d\") " pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:20.779791 kubelet[2862]: I0119 12:09:20.778038 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54nz\" (UniqueName: \"kubernetes.io/projected/57042a5e-1534-485c-abeb-75f1e57f8cf0-kube-api-access-p54nz\") pod \"calico-apiserver-595df97b5c-7r9mw\" (UID: 
\"57042a5e-1534-485c-abeb-75f1e57f8cf0\") " pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:20.779791 kubelet[2862]: I0119 12:09:20.778053 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75367eb7-d5a3-4610-be00-cbd5e7d7db9d-tigera-ca-bundle\") pod \"calico-kube-controllers-975cd56bc-wkczn\" (UID: \"75367eb7-d5a3-4610-be00-cbd5e7d7db9d\") " pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:20.779791 kubelet[2862]: I0119 12:09:20.778068 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b9860e0-5e66-444d-bc4f-e74b59e19721-calico-apiserver-certs\") pod \"calico-apiserver-595df97b5c-6lb2q\" (UID: \"6b9860e0-5e66-444d-bc4f-e74b59e19721\") " pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:20.779791 kubelet[2862]: I0119 12:09:20.778085 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49f9c26-4d74-485a-bf88-290c9f9a5235-config-volume\") pod \"coredns-66bc5c9577-qcffj\" (UID: \"a49f9c26-4d74-485a-bf88-290c9f9a5235\") " pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:20.779791 kubelet[2862]: I0119 12:09:20.778726 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c862d411-4d4f-4a97-b967-49e1eb15851d-config\") pod \"goldmane-7c778bb748-9qp6h\" (UID: \"c862d411-4d4f-4a97-b967-49e1eb15851d\") " pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:20.779905 kubelet[2862]: I0119 12:09:20.778742 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvck\" (UniqueName: \"kubernetes.io/projected/c862d411-4d4f-4a97-b967-49e1eb15851d-kube-api-access-tnvck\") pod \"goldmane-7c778bb748-9qp6h\" (UID: \"c862d411-4d4f-4a97-b967-49e1eb15851d\") " pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:20.779905 kubelet[2862]: I0119 12:09:20.778759 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljlw\" (UniqueName: \"kubernetes.io/projected/6b9860e0-5e66-444d-bc4f-e74b59e19721-kube-api-access-gljlw\") pod \"calico-apiserver-595df97b5c-6lb2q\" (UID: \"6b9860e0-5e66-444d-bc4f-e74b59e19721\") " pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:20.779905 kubelet[2862]: I0119 12:09:20.778774 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57042a5e-1534-485c-abeb-75f1e57f8cf0-calico-apiserver-certs\") pod \"calico-apiserver-595df97b5c-7r9mw\" (UID: \"57042a5e-1534-485c-abeb-75f1e57f8cf0\") " pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:20.779905 kubelet[2862]: I0119 12:09:20.778788 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4hc\" (UniqueName: \"kubernetes.io/projected/75367eb7-d5a3-4610-be00-cbd5e7d7db9d-kube-api-access-qs4hc\") pod \"calico-kube-controllers-975cd56bc-wkczn\" (UID: \"75367eb7-d5a3-4610-be00-cbd5e7d7db9d\") " pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:20.804906 systemd[1]: Created slice 
kubepods-besteffort-pod57042a5e_1534_485c_abeb_75f1e57f8cf0.slice - libcontainer container kubepods-besteffort-pod57042a5e_1534_485c_abeb_75f1e57f8cf0.slice. Jan 19 12:09:20.822887 systemd[1]: Created slice kubepods-burstable-pod81e1841f_312c_44d7_b340_4b8d02b8d37b.slice - libcontainer container kubepods-burstable-pod81e1841f_312c_44d7_b340_4b8d02b8d37b.slice. Jan 19 12:09:20.866940 systemd[1]: Created slice kubepods-besteffort-podc862d411_4d4f_4a97_b967_49e1eb15851d.slice - libcontainer container kubepods-besteffort-podc862d411_4d4f_4a97_b967_49e1eb15851d.slice. Jan 19 12:09:20.882773 kubelet[2862]: I0119 12:09:20.881861 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-backend-key-pair\") pod \"whisker-6498f87f67-blq74\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:20.882773 kubelet[2862]: I0119 12:09:20.881950 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qrs\" (UniqueName: \"kubernetes.io/projected/1474234e-7956-446d-ab4f-5e4881b3e006-kube-api-access-t7qrs\") pod \"whisker-6498f87f67-blq74\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:20.882773 kubelet[2862]: I0119 12:09:20.882070 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-ca-bundle\") pod \"whisker-6498f87f67-blq74\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:20.882793 systemd[1]: Created slice kubepods-besteffort-pod6b9860e0_5e66_444d_bc4f_e74b59e19721.slice - libcontainer container kubepods-besteffort-pod6b9860e0_5e66_444d_bc4f_e74b59e19721.slice. Jan 19 12:09:21.003872 systemd[1]: Created slice kubepods-besteffort-pod75367eb7_d5a3_4610_be00_cbd5e7d7db9d.slice - libcontainer container kubepods-besteffort-pod75367eb7_d5a3_4610_be00_cbd5e7d7db9d.slice. Jan 19 12:09:21.040841 systemd[1]: Created slice kubepods-besteffort-pod1474234e_7956_446d_ab4f_5e4881b3e006.slice - libcontainer container kubepods-besteffort-pod1474234e_7956_446d_ab4f_5e4881b3e006.slice. 
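[Editor's note] The slice names systemd creates here are a direct encoding of each pod's QoS class and UID: kubepods-burstable-pod<UID>.slice for the Burstable CoreDNS pods and kubepods-besteffort-pod<UID>.slice for the others, with the dashes in the UID replaced by underscores because '-' is the nesting separator in systemd slice names. A short sketch that reproduces the names seen above; the helper is hypothetical, not kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the cgroup slice name used for a pod under the
    // systemd cgroup driver: kubepods-<qos>-pod<uid>.slice, with '-' in the UID
    // mapped to '_' so it does not imply slice nesting.
    func podSliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
    }

    func main() {
        // Matches "kubepods-burstable-poda49f9c26_4d74_485a_bf88_290c9f9a5235.slice" above.
        fmt.Println(podSliceName("Burstable", "a49f9c26-4d74-485a-bf88-290c9f9a5235"))
        // Matches "kubepods-besteffort-pod57042a5e_1534_485c_abeb_75f1e57f8cf0.slice".
        fmt.Println(podSliceName("BestEffort", "57042a5e-1534-485c-abeb-75f1e57f8cf0"))
    }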
Jan 19 12:09:21.151734 containerd[1618]: time="2026-01-19T12:09:21.148869250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:21.172740 kubelet[2862]: E0119 12:09:21.172716 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:21.181661 kubelet[2862]: E0119 12:09:21.179066 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:21.181905 containerd[1618]: time="2026-01-19T12:09:21.181878031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:21.199743 containerd[1618]: time="2026-01-19T12:09:21.197866335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 19 12:09:21.201800 containerd[1618]: time="2026-01-19T12:09:21.201622037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:21.280632 containerd[1618]: time="2026-01-19T12:09:21.280055097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:21.335619 kubelet[2862]: E0119 12:09:21.330064 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:21.371892 containerd[1618]: time="2026-01-19T12:09:21.371846209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:21.418023 containerd[1618]: time="2026-01-19T12:09:21.416928512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:21.430741 containerd[1618]: time="2026-01-19T12:09:21.430715873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:21.447939 systemd[1]: Created slice kubepods-besteffort-podfe7ed3bd_4172_4537_91b9_e5f33dbfd6b5.slice - libcontainer container kubepods-besteffort-podfe7ed3bd_4172_4537_91b9_e5f33dbfd6b5.slice. Jan 19 12:09:21.507830 containerd[1618]: time="2026-01-19T12:09:21.507794088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:22.872682 containerd[1618]: time="2026-01-19T12:09:22.872632208Z" level=error msg="Failed to destroy network for sandbox \"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:22.890974 systemd[1]: run-netns-cni\x2dcc1b7fd7\x2df8e8\x2df1eb\x2ddd09\x2d279f551255db.mount: Deactivated successfully. 
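[Editor's note] Every sandbox failure that follows has the same root cause: the Calico CNI plugin reads /var/lib/calico/nodename, a file that calico-node writes once it is running, and at this point the node has only just started pulling ghcr.io/flatcar/calico/node:v3.30.4. Until that file exists, each CNI add and delete stats it and bails out with the error string seen below. A trivial sketch of that precondition, assuming only the path quoted in the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // calico-node writes this file with the node's name once it has started;
        // the CNI plugin refuses to set up or tear down pod networking until then.
        const nodenameFile = "/var/lib/calico/nodename"

        if _, err := os.Stat(nodenameFile); err != nil {
            // At this point in the log this is the branch taken:
            // stat /var/lib/calico/nodename: no such file or directory
            fmt.Println("calico not ready:", err)
            return
        }
        fmt.Println("calico nodename file present; CNI setup can proceed")
    }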
Jan 19 12:09:22.918704 containerd[1618]: time="2026-01-19T12:09:22.917045482Z" level=error msg="Failed to destroy network for sandbox \"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:22.931771 systemd[1]: run-netns-cni\x2d502e39db\x2d45d8\x2dfe45\x2d62a5\x2d18a62754159c.mount: Deactivated successfully. Jan 19 12:09:22.957804 containerd[1618]: time="2026-01-19T12:09:22.957760114Z" level=error msg="Failed to destroy network for sandbox \"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:22.966843 systemd[1]: run-netns-cni\x2d17f16f80\x2ddda2\x2de5b2\x2d6045\x2dae03be898923.mount: Deactivated successfully. Jan 19 12:09:23.005777 containerd[1618]: time="2026-01-19T12:09:22.998572980Z" level=error msg="Failed to destroy network for sandbox \"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.008945 systemd[1]: run-netns-cni\x2d5cc7308a\x2d9b7d\x2deb7f\x2d2863\x2d85f0f0c8aaff.mount: Deactivated successfully. Jan 19 12:09:23.070859 containerd[1618]: time="2026-01-19T12:09:23.070728906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.075889 containerd[1618]: time="2026-01-19T12:09:23.075853019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.082771 kubelet[2862]: E0119 12:09:23.080701 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.082771 kubelet[2862]: E0119 12:09:23.081683 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:23.082771 kubelet[2862]: E0119 12:09:23.081703 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:23.087776 kubelet[2862]: E0119 12:09:23.081742 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796c2f2b6d2ddfbfb435e7eaf843b6f346a69e39fea0d209a6a8c21a02319f2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5scwx" podUID="81e1841f-312c-44d7-b340-4b8d02b8d37b" Jan 19 12:09:23.096956 containerd[1618]: time="2026-01-19T12:09:23.096910763Z" level=error msg="Failed to destroy network for sandbox \"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.111889 containerd[1618]: time="2026-01-19T12:09:23.098881486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.111889 containerd[1618]: time="2026-01-19T12:09:23.106987821Z" level=error msg="Failed to destroy network for sandbox \"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.108742 systemd[1]: run-netns-cni\x2d91259237\x2d7dbf\x2d5f25\x2d8fe2\x2d2719d3fac2f1.mount: Deactivated successfully. 
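[Editor's note] Interleaved with the sandbox failures, the kubelet keeps emitting the dns.go "Nameserver limits exceeded" warning. The resolver configuration it builds for pods is capped at three nameservers (the glibc resolver only consults the first three), so a resolv.conf listing more than three entries is trimmed to the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8, and the rest are dropped with this warning. A hedged sketch of that trimming, not the kubelet's actual code:

    package main

    import "fmt"

    // maxNameservers mirrors the limit the warning refers to: only the first
    // three nameservers end up in a pod's resolv.conf.
    const maxNameservers = 3

    func trimNameservers(ns []string) []string {
        if len(ns) <= maxNameservers {
            return ns
        }
        return ns[:maxNameservers]
    }

    func main() {
        // Hypothetical source resolv.conf with one entry too many.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        applied := trimNameservers(host)
        // Matches the "applied nameserver line" in the warning above.
        fmt.Println(applied) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }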
Jan 19 12:09:23.112640 kubelet[2862]: E0119 12:09:23.103807 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.112640 kubelet[2862]: E0119 12:09:23.103870 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:23.112640 kubelet[2862]: E0119 12:09:23.103891 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:23.112716 kubelet[2862]: E0119 12:09:23.103943 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f36f187c6ae99c32f7b2ae7949ab624072e715d8b4377a9320a730ce6e99d48c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:09:23.112716 kubelet[2862]: E0119 12:09:23.103998 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.112716 kubelet[2862]: E0119 12:09:23.104016 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:23.114791 kubelet[2862]: E0119 12:09:23.104030 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:23.114791 kubelet[2862]: E0119 12:09:23.104054 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f67543dfac23ad0a0bfcd951963570c4347699532526af3133999959d01f6440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:23.126827 containerd[1618]: time="2026-01-19T12:09:23.126030645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.140883 kubelet[2862]: E0119 12:09:23.140832 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.141670 kubelet[2862]: E0119 12:09:23.141646 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:23.141803 kubelet[2862]: E0119 12:09:23.141786 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:23.151751 kubelet[2862]: E0119 12:09:23.151720 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e738c0b5b8f91e0d5af274efac0a6a56aca1da0e89651ea698d7c7b9c6f27210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:09:23.156957 containerd[1618]: time="2026-01-19T12:09:23.155843723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.157800 kubelet[2862]: E0119 12:09:23.156888 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.157800 kubelet[2862]: E0119 12:09:23.156936 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:23.157800 kubelet[2862]: E0119 12:09:23.156954 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:23.158708 kubelet[2862]: E0119 12:09:23.156995 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71c29488893c2b12c24c3dc185587ca73fbc8bc298d0dbbc1532173f8bd0f465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qcffj" podUID="a49f9c26-4d74-485a-bf88-290c9f9a5235" Jan 19 12:09:23.180604 containerd[1618]: time="2026-01-19T12:09:23.179846846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.183736 kubelet[2862]: E0119 12:09:23.182939 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.184594 kubelet[2862]: E0119 12:09:23.183901 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:23.184594 kubelet[2862]: E0119 12:09:23.184077 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:23.188731 kubelet[2862]: E0119 12:09:23.187877 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7053d5a54ea6124b841f6585dfdb6f0d13a4b2df0fb2668b2b8b2e6f35091058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:09:23.434584 containerd[1618]: time="2026-01-19T12:09:23.426970714Z" level=error msg="Failed to destroy network for sandbox \"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.464970 containerd[1618]: time="2026-01-19T12:09:23.462844761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.467838 containerd[1618]: time="2026-01-19T12:09:23.467746090Z" level=error 
msg="Failed to destroy network for sandbox \"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.471843 kubelet[2862]: E0119 12:09:23.469747 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.471843 kubelet[2862]: E0119 12:09:23.469802 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:23.471843 kubelet[2862]: E0119 12:09:23.469820 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:23.471963 kubelet[2862]: E0119 12:09:23.469862 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae6e31b7db0544c6bfab66de858f7bf109177925ffe3841b38020e7502bc6640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6498f87f67-blq74" podUID="1474234e-7956-446d-ab4f-5e4881b3e006" Jan 19 12:09:23.486687 containerd[1618]: time="2026-01-19T12:09:23.485905144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:23.487618 kubelet[2862]: E0119 12:09:23.486898 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 19 12:09:23.487618 kubelet[2862]: E0119 12:09:23.486939 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:23.487618 kubelet[2862]: E0119 12:09:23.486964 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:23.487724 kubelet[2862]: E0119 12:09:23.487020 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef7d117fefe56a6a422d891d2ef0a673557acca162ed018f246fa60237dcd524\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:09:23.892880 systemd[1]: run-netns-cni\x2dca4e9c09\x2d9553\x2d7890\x2dabfe\x2db174742066d9.mount: Deactivated successfully. Jan 19 12:09:23.893768 systemd[1]: run-netns-cni\x2d2f9eb16e\x2d4841\x2d71e7\x2db28f\x2d42dfac03fa75.mount: Deactivated successfully. Jan 19 12:09:23.893841 systemd[1]: run-netns-cni\x2d5791e175\x2d4f95\x2d2d6e\x2d69e6\x2d5702fb426200.mount: Deactivated successfully. Jan 19 12:09:33.442617 containerd[1618]: time="2026-01-19T12:09:33.442026843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:33.917062 containerd[1618]: time="2026-01-19T12:09:33.916495822Z" level=error msg="Failed to destroy network for sandbox \"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:33.923936 systemd[1]: run-netns-cni\x2d60c6ed55\x2d058d\x2dfcfb\x2dd558\x2d30ff0a198ac6.mount: Deactivated successfully. 
Jan 19 12:09:33.938725 containerd[1618]: time="2026-01-19T12:09:33.938682681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:33.943699 kubelet[2862]: E0119 12:09:33.941919 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:33.943699 kubelet[2862]: E0119 12:09:33.941992 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:33.943699 kubelet[2862]: E0119 12:09:33.942017 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:33.945811 kubelet[2862]: E0119 12:09:33.942078 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607050eaedef87af71e14022c7a1c779a6a6c69d7aab47261f044cbd823ac1f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:09:34.432649 kubelet[2862]: E0119 12:09:34.432042 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:34.436862 containerd[1618]: time="2026-01-19T12:09:34.436834198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:34.459528 containerd[1618]: time="2026-01-19T12:09:34.459492947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,}" Jan 19 
12:09:34.794849 containerd[1618]: time="2026-01-19T12:09:34.792756662Z" level=error msg="Failed to destroy network for sandbox \"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.802708 systemd[1]: run-netns-cni\x2d915a19a6\x2dfb80\x2d7540\x2d4526\x2d9a3c76fcb384.mount: Deactivated successfully. Jan 19 12:09:34.823066 containerd[1618]: time="2026-01-19T12:09:34.823038691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.830683 kubelet[2862]: E0119 12:09:34.829937 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.830683 kubelet[2862]: E0119 12:09:34.829989 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:34.830683 kubelet[2862]: E0119 12:09:34.830005 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:34.833887 kubelet[2862]: E0119 12:09:34.833861 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a8e0ad9dae60d4152539dc1f2b8fc7a35a6c63d17fdda23cb84b907d42ab865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5scwx" podUID="81e1841f-312c-44d7-b340-4b8d02b8d37b" Jan 19 12:09:34.964631 containerd[1618]: time="2026-01-19T12:09:34.963057882Z" level=error msg="Failed to destroy network for sandbox \"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.971726 systemd[1]: run-netns-cni\x2d8e1e786c\x2d6195\x2d8c5d\x2dbfc0\x2d8473f72d79f2.mount: Deactivated successfully. Jan 19 12:09:34.999013 containerd[1618]: time="2026-01-19T12:09:34.997907933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.999926 kubelet[2862]: E0119 12:09:34.999709 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:34.999926 kubelet[2862]: E0119 12:09:34.999763 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:34.999926 kubelet[2862]: E0119 12:09:34.999783 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:35.000823 kubelet[2862]: E0119 12:09:34.999829 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed9b12a942ce9376c96f8a391c7e8babb57a3830ecd6875f585c2ebc70d376c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:09:35.445701 kubelet[2862]: E0119 12:09:35.444815 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:35.448729 containerd[1618]: time="2026-01-19T12:09:35.448693068Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:35.464048 containerd[1618]: time="2026-01-19T12:09:35.463756523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:36.079957 containerd[1618]: time="2026-01-19T12:09:36.074794520Z" level=error msg="Failed to destroy network for sandbox \"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.082864 systemd[1]: run-netns-cni\x2d3b8d05c7\x2d7b5e\x2dd636\x2df310\x2d0ab63f373458.mount: Deactivated successfully. Jan 19 12:09:36.098158 containerd[1618]: time="2026-01-19T12:09:36.098016284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.100900 kubelet[2862]: E0119 12:09:36.099808 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.100900 kubelet[2862]: E0119 12:09:36.100031 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:36.100900 kubelet[2862]: E0119 12:09:36.100063 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:36.101824 kubelet[2862]: E0119 12:09:36.100710 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc9abc60adc73a18f3a9555a770d577aa5deccf43794ce10c1423146fec4aefd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qcffj" podUID="a49f9c26-4d74-485a-bf88-290c9f9a5235" Jan 19 12:09:36.140057 containerd[1618]: time="2026-01-19T12:09:36.139796234Z" level=error msg="Failed to destroy network for sandbox \"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.144972 systemd[1]: run-netns-cni\x2d212d5cd9\x2dbcad\x2d36c8\x2d5104\x2d4a5cb4c05964.mount: Deactivated successfully. Jan 19 12:09:36.169699 containerd[1618]: time="2026-01-19T12:09:36.168080902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.170707 kubelet[2862]: E0119 12:09:36.170048 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:36.170707 kubelet[2862]: E0119 12:09:36.170614 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:36.170707 kubelet[2862]: E0119 12:09:36.170635 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:36.170805 kubelet[2862]: E0119 12:09:36.170680 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cea7ce573dd1b60a9e58feee259f16973aaa1a55dfe2cff4476d3647eb35983\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" 
podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:09:36.431982 kubelet[2862]: E0119 12:09:36.429847 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:36.447816 containerd[1618]: time="2026-01-19T12:09:36.447780107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:36.460409 containerd[1618]: time="2026-01-19T12:09:36.459748860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:36.465820 containerd[1618]: time="2026-01-19T12:09:36.465047248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:37.032973 containerd[1618]: time="2026-01-19T12:09:37.030883643Z" level=error msg="Failed to destroy network for sandbox \"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.061799 containerd[1618]: time="2026-01-19T12:09:37.058954842Z" level=error msg="Failed to destroy network for sandbox \"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.083842 systemd[1]: run-netns-cni\x2dd37dd973\x2d3700\x2d5082\x2d7d26\x2dc9fd6859b6f4.mount: Deactivated successfully. Jan 19 12:09:37.084670 systemd[1]: run-netns-cni\x2dc26436cd\x2d52cf\x2df141\x2d7482\x2d0d050d3d6255.mount: Deactivated successfully. 
Jan 19 12:09:37.090743 containerd[1618]: time="2026-01-19T12:09:37.086885116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.091795 kubelet[2862]: E0119 12:09:37.088632 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.091795 kubelet[2862]: E0119 12:09:37.088678 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:37.091795 kubelet[2862]: E0119 12:09:37.088697 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:37.092765 kubelet[2862]: E0119 12:09:37.088741 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f1389b6b783ed51e58ef8366570b53fee6c4af1f564aa5b7947632334f624dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:09:37.128883 containerd[1618]: time="2026-01-19T12:09:37.128042251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.130721 kubelet[2862]: E0119 12:09:37.129844 2862 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.130721 kubelet[2862]: E0119 12:09:37.129903 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:37.130721 kubelet[2862]: E0119 12:09:37.129927 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:37.133674 kubelet[2862]: E0119 12:09:37.129983 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7567a07abef117433588333a1921d8be37f4512891ca3db0498b5d85f92178c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6498f87f67-blq74" podUID="1474234e-7956-446d-ab4f-5e4881b3e006" Jan 19 12:09:37.341841 containerd[1618]: time="2026-01-19T12:09:37.341803357Z" level=error msg="Failed to destroy network for sandbox \"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.348955 systemd[1]: run-netns-cni\x2d2c96110d\x2d8bb6\x2d96b3\x2dabb0\x2d90edebc6eaff.mount: Deactivated successfully. 
Jan 19 12:09:37.370494 containerd[1618]: time="2026-01-19T12:09:37.369700245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.371820 kubelet[2862]: E0119 12:09:37.371775 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:37.371883 kubelet[2862]: E0119 12:09:37.371829 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:37.371883 kubelet[2862]: E0119 12:09:37.371869 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:37.371936 kubelet[2862]: E0119 12:09:37.371909 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc2d0f777a3929f8cd389fb513a697a169b31911bd0db7bbded27d40e2270f0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:39.427650 kubelet[2862]: E0119 12:09:39.426894 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:42.432693 kubelet[2862]: E0119 12:09:42.430808 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:45.445960 containerd[1618]: time="2026-01-19T12:09:45.444890931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:46.143078 containerd[1618]: 
time="2026-01-19T12:09:46.143020690Z" level=error msg="Failed to destroy network for sandbox \"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:46.150070 systemd[1]: run-netns-cni\x2db8c758cb\x2d6f20\x2d0585\x2db3ef\x2d22f6dd85617d.mount: Deactivated successfully. Jan 19 12:09:46.174637 containerd[1618]: time="2026-01-19T12:09:46.173761012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:46.178429 kubelet[2862]: E0119 12:09:46.178046 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:46.179957 kubelet[2862]: E0119 12:09:46.178881 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:46.183008 kubelet[2862]: E0119 12:09:46.182025 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:46.183008 kubelet[2862]: E0119 12:09:46.182941 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74b5f42bdadf04414d033b9d4934f940d3c26ea786d1da3bca8f4e16eefc94fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:09:47.437521 kubelet[2862]: E0119 12:09:47.436743 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 
12:09:47.441831 containerd[1618]: time="2026-01-19T12:09:47.439477699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:47.458951 containerd[1618]: time="2026-01-19T12:09:47.458896103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:47.950041 containerd[1618]: time="2026-01-19T12:09:47.949559865Z" level=error msg="Failed to destroy network for sandbox \"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:47.960872 systemd[1]: run-netns-cni\x2dc52bcb4f\x2de972\x2d7bcc\x2d3036\x2d93b9258dd8ef.mount: Deactivated successfully. Jan 19 12:09:47.977997 containerd[1618]: time="2026-01-19T12:09:47.977954864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:47.984936 kubelet[2862]: E0119 12:09:47.981426 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:47.984936 kubelet[2862]: E0119 12:09:47.983762 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:47.984936 kubelet[2862]: E0119 12:09:47.983793 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:09:47.985441 kubelet[2862]: E0119 12:09:47.983841 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"996416cc9b3a5a7667f2ff89727fac5bbe901cf91481153f1c39ec4040a3588e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5scwx" podUID="81e1841f-312c-44d7-b340-4b8d02b8d37b" Jan 19 12:09:48.132919 containerd[1618]: time="2026-01-19T12:09:48.132869996Z" level=error msg="Failed to destroy network for sandbox \"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:48.142851 systemd[1]: run-netns-cni\x2de641aa24\x2df73f\x2dfb08\x2d2e77\x2dc7a7c7de0e6a.mount: Deactivated successfully. Jan 19 12:09:48.155851 containerd[1618]: time="2026-01-19T12:09:48.155741851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:48.157735 kubelet[2862]: E0119 12:09:48.157061 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:48.158471 kubelet[2862]: E0119 12:09:48.157842 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:48.158471 kubelet[2862]: E0119 12:09:48.157866 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:09:48.158471 kubelet[2862]: E0119 12:09:48.157914 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"824b843087dd7bc4421c1c90a92e2e2407e2b29bf782d4fe37e3a815417425aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:09:48.439534 containerd[1618]: time="2026-01-19T12:09:48.438873683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:48.454063 containerd[1618]: time="2026-01-19T12:09:48.453993183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:48.471428 containerd[1618]: time="2026-01-19T12:09:48.462478792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:49.166083 containerd[1618]: time="2026-01-19T12:09:49.154886365Z" level=error msg="Failed to destroy network for sandbox \"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.162557 systemd[1]: run-netns-cni\x2defaefefa\x2d1c66\x2ded09\x2df834\x2d37bb26ad21e4.mount: Deactivated successfully. Jan 19 12:09:49.192081 containerd[1618]: time="2026-01-19T12:09:49.192035404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.199831 kubelet[2862]: E0119 12:09:49.196944 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.199831 kubelet[2862]: E0119 12:09:49.196997 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:49.199831 kubelet[2862]: E0119 12:09:49.197016 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" Jan 19 12:09:49.201054 kubelet[2862]: E0119 12:09:49.197065 2862 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87a40e69bf987c401651a73e7691543aa82e0af7297566f881f2e2cb82178b81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:09:49.222838 containerd[1618]: time="2026-01-19T12:09:49.222526569Z" level=error msg="Failed to destroy network for sandbox \"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.232078 systemd[1]: run-netns-cni\x2dfbed1b72\x2d5ebf\x2d968e\x2d25a8\x2d583c5dfe161f.mount: Deactivated successfully. Jan 19 12:09:49.237879 containerd[1618]: time="2026-01-19T12:09:49.236612418Z" level=error msg="Failed to destroy network for sandbox \"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.244834 containerd[1618]: time="2026-01-19T12:09:49.241784544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.248005 systemd[1]: run-netns-cni\x2d3b942201\x2d691e\x2dc75f\x2df2d4\x2dd5393bfe79c8.mount: Deactivated successfully. 
Jan 19 12:09:49.255893 kubelet[2862]: E0119 12:09:49.254853 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.255893 kubelet[2862]: E0119 12:09:49.254918 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:49.255893 kubelet[2862]: E0119 12:09:49.254945 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:09:49.256050 kubelet[2862]: E0119 12:09:49.255002 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6766c718e2bb2467617994171bd0fe5c6b6f5ed8eefafe806bb84f6a1b63600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6498f87f67-blq74" podUID="1474234e-7956-446d-ab4f-5e4881b3e006" Jan 19 12:09:49.268805 containerd[1618]: time="2026-01-19T12:09:49.265557875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.275996 kubelet[2862]: E0119 12:09:49.270530 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.275996 kubelet[2862]: E0119 12:09:49.275835 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:49.282793 kubelet[2862]: E0119 12:09:49.281730 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sh4c8" Jan 19 12:09:49.282793 kubelet[2862]: E0119 12:09:49.281810 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b245976747de5ae75b47326fcc43af72078383901abf0fce7edf3ae88e65424\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:09:49.448788 kubelet[2862]: E0119 12:09:49.446623 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:09:49.451750 containerd[1618]: time="2026-01-19T12:09:49.450051726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:09:49.451750 containerd[1618]: time="2026-01-19T12:09:49.450996850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,}" Jan 19 12:09:49.932541 containerd[1618]: time="2026-01-19T12:09:49.927826782Z" level=error msg="Failed to destroy network for sandbox \"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.933736 systemd[1]: run-netns-cni\x2db5f9c8c8\x2d3099\x2d8952\x2d6691\x2d6d450286fb9f.mount: Deactivated successfully. 
Jan 19 12:09:49.956859 containerd[1618]: time="2026-01-19T12:09:49.955841630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.959674 kubelet[2862]: E0119 12:09:49.959475 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:49.959730 kubelet[2862]: E0119 12:09:49.959702 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:49.959767 kubelet[2862]: E0119 12:09:49.959730 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:09:49.959981 kubelet[2862]: E0119 12:09:49.959792 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fea605eedf4d7419936eabc914579c1ab33b08cc2e40d8d91a2686c36a01083c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qcffj" podUID="a49f9c26-4d74-485a-bf88-290c9f9a5235" Jan 19 12:09:50.025766 containerd[1618]: time="2026-01-19T12:09:50.025055875Z" level=error msg="Failed to destroy network for sandbox \"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:50.035916 systemd[1]: run-netns-cni\x2d6fe3f24c\x2da50a\x2d0f1d\x2d5ae5\x2de162be58d7ae.mount: Deactivated successfully. 
Jan 19 12:09:50.049023 containerd[1618]: time="2026-01-19T12:09:50.047997651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:50.055073 kubelet[2862]: E0119 12:09:50.054871 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:50.055853 kubelet[2862]: E0119 12:09:50.055086 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:50.055853 kubelet[2862]: E0119 12:09:50.055569 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:09:50.055853 kubelet[2862]: E0119 12:09:50.055625 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96117fc48957f5e3aa3a099996d6cc91d1ac9eaaefba77766b534a7c3c3bd245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:09:57.451589 containerd[1618]: time="2026-01-19T12:09:57.450885981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,}" Jan 19 12:09:58.049481 containerd[1618]: time="2026-01-19T12:09:58.045952447Z" level=error msg="Failed to destroy network for sandbox \"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:58.058938 systemd[1]: 
run-netns-cni\x2d4b375248\x2de329\x2dbb0d\x2dc24d\x2db6ad38734df8.mount: Deactivated successfully. Jan 19 12:09:58.068745 containerd[1618]: time="2026-01-19T12:09:58.068070296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:58.071741 kubelet[2862]: E0119 12:09:58.069531 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:09:58.071741 kubelet[2862]: E0119 12:09:58.069580 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:58.071741 kubelet[2862]: E0119 12:09:58.069599 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9qp6h" Jan 19 12:09:58.075985 kubelet[2862]: E0119 12:09:58.069644 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55b55aaa34ccd51cba25b9aa41a37300f946d06d8d6abaa514c3125ae0ebff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:09:58.393620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339815210.mount: Deactivated successfully. 
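Every failed RunPodSandbox above (whisker, csi-node-driver, coredns, calico-apiserver, goldmane) dies on the same precondition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, which calico-node writes only once it is running with /var/lib/calico mounted. A minimal Go sketch of that gate, reconstructed from the error text rather than from the real cni-plugin source, shows why every ADD and DEL keeps returning the identical message until the node container starts:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path named in the errors above; calico-node creates it
// at startup when the host's /var/lib/calico is mounted into the container.
const nodenameFile = "/var/lib/calico/nodename"

// checkNodename mirrors the failure mode in the log: until the file exists,
// every CNI ADD/DEL fails, the sandbox is torn down, and the kubelet retries,
// producing the repeating "Failed to create sandbox for pod" entries.
func checkNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := checkNodename()
	if err != nil {
		fmt.Println("CNI add/del would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}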
Jan 19 12:09:58.493898 containerd[1618]: time="2026-01-19T12:09:58.493845884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:58.515814 containerd[1618]: time="2026-01-19T12:09:58.500926481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 19 12:09:58.522577 containerd[1618]: time="2026-01-19T12:09:58.519604043Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:58.530071 containerd[1618]: time="2026-01-19T12:09:58.526729242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:09:58.539050 containerd[1618]: time="2026-01-19T12:09:58.537910847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 37.340014135s" Jan 19 12:09:58.539050 containerd[1618]: time="2026-01-19T12:09:58.538833330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 19 12:09:58.652961 containerd[1618]: time="2026-01-19T12:09:58.649821056Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 19 12:09:58.730686 containerd[1618]: time="2026-01-19T12:09:58.730088071Z" level=info msg="Container 3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:09:58.762615 containerd[1618]: time="2026-01-19T12:09:58.761038639Z" level=info msg="CreateContainer within sandbox \"1f718ce4f34d86d9bcbcb0e24eee4742ac8802cdae241b83bd778942893afca4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f\"" Jan 19 12:09:58.767588 containerd[1618]: time="2026-01-19T12:09:58.765941666Z" level=info msg="StartContainer for \"3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f\"" Jan 19 12:09:58.769948 containerd[1618]: time="2026-01-19T12:09:58.769927902Z" level=info msg="connecting to shim 3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f" address="unix:///run/containerd/s/3447015fc15e0140b51f1e925277d7c818a57ec9fc8391dff98ab4ec53c751a5" protocol=ttrpc version=3 Jan 19 12:09:58.965733 systemd[1]: Started cri-containerd-3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f.scope - libcontainer container 3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f. 
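For scale, the pull above reports 156,880,025 bytes read in 37.340014135s, i.e. roughly 4 MB/s, which is why calico-node only becomes startable about a minute after pod creation. A trivial Go check of that arithmetic, using only the figures from the log:

package main

import "fmt"

func main() {
	// Values copied from the "Pulled image ... in 37.340014135s" entry above.
	const bytesRead = 156880025.0
	const seconds = 37.340014135

	rate := bytesRead / seconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1<<20))
}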
Jan 19 12:09:59.200606 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 19 12:09:59.218082 kernel: audit: type=1334 audit(1768824599.164:572): prog-id=172 op=LOAD Jan 19 12:09:59.218670 kernel: audit: type=1300 audit(1768824599.164:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.164000 audit: BPF prog-id=172 op=LOAD Jan 19 12:09:59.164000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.297840 kernel: audit: type=1327 audit(1768824599.164:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.359971 kernel: audit: type=1334 audit(1768824599.164:573): prog-id=173 op=LOAD Jan 19 12:09:59.164000 audit: BPF prog-id=173 op=LOAD Jan 19 12:09:59.164000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.456923 kernel: audit: type=1300 audit(1768824599.164:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.534614 kernel: audit: type=1327 audit(1768824599.164:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.563028 kernel: audit: type=1334 audit(1768824599.164:574): prog-id=173 op=UNLOAD Jan 19 12:09:59.164000 audit: BPF prog-id=173 op=UNLOAD Jan 19 12:09:59.164000 audit[4514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.648233 kernel: audit: type=1300 
audit(1768824599.164:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.724004 kernel: audit: type=1327 audit(1768824599.164:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.164000 audit: BPF prog-id=172 op=UNLOAD Jan 19 12:09:59.753500 kernel: audit: type=1334 audit(1768824599.164:575): prog-id=172 op=UNLOAD Jan 19 12:09:59.164000 audit[4514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.164000 audit: BPF prog-id=174 op=LOAD Jan 19 12:09:59.164000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3454 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:09:59.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364643166623934393934396234393765323836663961663865386334 Jan 19 12:09:59.855073 containerd[1618]: time="2026-01-19T12:09:59.854913967Z" level=info msg="StartContainer for \"3dd1fb949949b497e286f9af8e8c4e74eca28d75f7425ce7cb4db3a0674c1d2f\" returns successfully" Jan 19 12:09:59.912738 kubelet[2862]: E0119 12:09:59.909567 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:00.081914 kubelet[2862]: I0119 12:10:00.080902 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nh77m" podStartSLOduration=2.852155771 podStartE2EDuration="1m1.080882256s" podCreationTimestamp="2026-01-19 12:08:59 +0000 UTC" firstStartedPulling="2026-01-19 12:09:00.325780339 +0000 UTC m=+33.576313091" lastFinishedPulling="2026-01-19 12:09:58.554506824 +0000 UTC m=+91.805039576" observedRunningTime="2026-01-19 12:10:00.065756584 +0000 UTC m=+93.316289336" watchObservedRunningTime="2026-01-19 12:10:00.080882256 +0000 UTC m=+93.331415008" Jan 19 12:10:00.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.55:22-10.0.0.1:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:00.422076 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898). Jan 19 12:10:00.450905 kubelet[2862]: E0119 12:10:00.450621 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:00.462549 kubelet[2862]: E0119 12:10:00.461058 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:00.487851 containerd[1618]: time="2026-01-19T12:10:00.487052984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,}" Jan 19 12:10:00.489937 containerd[1618]: time="2026-01-19T12:10:00.488978861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,}" Jan 19 12:10:01.024582 kubelet[2862]: E0119 12:10:01.010956 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:01.244088 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 19 12:10:01.246913 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 19 12:10:01.327000 audit[4563]: USER_ACCT pid=4563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:01.333003 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:01.334000 audit[4563]: CRED_ACQ pid=4563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:01.334000 audit[4563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9cf44a60 a2=3 a3=0 items=0 ppid=1 pid=4563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:01.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:01.337731 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:01.359874 systemd-logind[1598]: New session 9 of user core. Jan 19 12:10:01.372705 systemd[1]: Started session-9.scope - Session 9 of User core. 
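The audit PROCTITLE fields a few entries back are the runc command line, hex-encoded with NUL-separated arguments and truncated by the kernel. Under that reading, a short Go sketch decodes the leading portion of one of them back into argv:

package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// Leading portion of a PROCTITLE value from the audit records above
	// (the kernel truncates the field, so the final argument is cut short).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	// Arguments are NUL-separated, as in /proc/<pid>/cmdline.
	for i, arg := range bytes.Split(raw, []byte{0}) {
		fmt.Printf("argv[%d] = %q\n", i, arg)
	}
	// Prints: "runc", "--root", "/run/containerd/runc/k8s.io", "--log"
}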
Jan 19 12:10:01.390000 audit[4563]: USER_START pid=4563 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:01.397000 audit[4631]: CRED_ACQ pid=4631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:01.566014 containerd[1618]: time="2026-01-19T12:10:01.564822641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:01.587668 containerd[1618]: time="2026-01-19T12:10:01.576677933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:10:02.058825 containerd[1618]: time="2026-01-19T12:10:02.056854389Z" level=error msg="Failed to destroy network for sandbox \"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.065712 systemd[1]: run-netns-cni\x2dad1ef290\x2d4175\x2d3e8f\x2d863a\x2d20285e6ecebb.mount: Deactivated successfully. Jan 19 12:10:02.139933 containerd[1618]: time="2026-01-19T12:10:02.133697461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.144820 kubelet[2862]: E0119 12:10:02.134703 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.144820 kubelet[2862]: E0119 12:10:02.134761 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:10:02.144820 kubelet[2862]: E0119 12:10:02.134787 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qcffj" Jan 19 12:10:02.145760 kubelet[2862]: E0119 12:10:02.134844 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qcffj_kube-system(a49f9c26-4d74-485a-bf88-290c9f9a5235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30945a553246aa33bb61ab75baabaf9a758d7010423428b69e06eb7cbfc2a1da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qcffj" podUID="a49f9c26-4d74-485a-bf88-290c9f9a5235" Jan 19 12:10:02.234847 containerd[1618]: time="2026-01-19T12:10:02.232835552Z" level=error msg="Failed to destroy network for sandbox \"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.242563 systemd[1]: run-netns-cni\x2db84fae2f\x2d22c3\x2dcb89\x2d53bb\x2d318114d26968.mount: Deactivated successfully. Jan 19 12:10:02.248052 sshd[4631]: Connection closed by 10.0.0.1 port 54898 Jan 19 12:10:02.247073 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:02.287000 audit[4563]: USER_END pid=4563 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:02.289000 audit[4563]: CRED_DISP pid=4563 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:02.297766 containerd[1618]: time="2026-01-19T12:10:02.297078931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.309079 kubelet[2862]: E0119 12:10:02.305890 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.309079 kubelet[2862]: E0119 12:10:02.305964 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:10:02.309079 kubelet[2862]: E0119 12:10:02.305990 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5scwx" Jan 19 12:10:02.312687 kubelet[2862]: E0119 12:10:02.306053 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5scwx_kube-system(81e1841f-312c-44d7-b340-4b8d02b8d37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55c44aff12950aaf0af8875d3699149d6bedb7feef0a5fa41c29ad5244b0a48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5scwx" podUID="81e1841f-312c-44d7-b340-4b8d02b8d37b" Jan 19 12:10:02.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.55:22-10.0.0.1:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:02.319066 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:54898.service: Deactivated successfully. Jan 19 12:10:02.336563 systemd[1]: session-9.scope: Deactivated successfully. Jan 19 12:10:02.348690 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Jan 19 12:10:02.361988 systemd-logind[1598]: Removed session 9. Jan 19 12:10:02.442769 containerd[1618]: time="2026-01-19T12:10:02.441763890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:02.463040 containerd[1618]: time="2026-01-19T12:10:02.458050848Z" level=error msg="Failed to destroy network for sandbox \"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.471874 systemd[1]: run-netns-cni\x2d75dbab2c\x2de94d\x2ddacd\x2d4cd9\x2de4a4a5a5c972.mount: Deactivated successfully. 
Jan 19 12:10:02.507074 containerd[1618]: time="2026-01-19T12:10:02.506868446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.517625 kubelet[2862]: E0119 12:10:02.517501 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:02.517625 kubelet[2862]: E0119 12:10:02.517550 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:10:02.517625 kubelet[2862]: E0119 12:10:02.517569 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" Jan 19 12:10:02.517868 kubelet[2862]: E0119 12:10:02.517611 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37604881fd2dde6b84b7a9b9453c2b3097174eaf2b1793f51ce4f11341be0fa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:03.075597 containerd[1618]: time="2026-01-19T12:10:03.074073703Z" level=error msg="Failed to destroy network for sandbox \"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:03.098791 systemd[1]: run-netns-cni\x2d7db02014\x2d29c1\x2d7d68\x2d39b5\x2d3c7dba642559.mount: Deactivated successfully. 
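The mount unit names in these entries (run-netns-cni\x2d..., var-lib-kubelet-pods-...\x7eprojected-...) use systemd's unit-name escaping: "/" becomes "-" and other special bytes become \xHH, so \x2d is a literal "-" and \x7e a "~". A small Go sketch (illustrative, not a full reimplementation of systemd-escape) that undoes this for the netns unit just above:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses the escaping seen in the journal: "-" stands for
// "/" and "\xHH" for the raw byte with that hex value.
func unescapeUnitPath(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '-':
			b.WriteByte('/')
		case s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x':
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // the loop's i++ then skips past the 4-char escape
				continue
			}
			b.WriteByte(s[i])
		default:
			b.WriteByte(s[i])
		}
	}
	return b.String()
}

func main() {
	// Unit name from the entry above, without its ".mount" suffix; the
	// leading "/" of the original path is dropped by the escaping.
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d7db02014\x2d29c1\x2d7d68\x2d39b5\x2d3c7dba642559`))
	// -> run/netns/cni-7db02014-29c1-7d68-39b5-3c7dba642559
}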
Jan 19 12:10:03.136671 containerd[1618]: time="2026-01-19T12:10:03.136631273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6498f87f67-blq74,Uid:1474234e-7956-446d-ab4f-5e4881b3e006,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:03.146668 kubelet[2862]: E0119 12:10:03.143740 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:03.146668 kubelet[2862]: E0119 12:10:03.143797 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:10:03.146668 kubelet[2862]: E0119 12:10:03.143817 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6498f87f67-blq74" Jan 19 12:10:03.147684 kubelet[2862]: E0119 12:10:03.143859 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6498f87f67-blq74_calico-system(1474234e-7956-446d-ab4f-5e4881b3e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20a5d13d08dfea3b3527d49dcecfa082cad416594dd4d8e2f57ddec441c32d8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6498f87f67-blq74" podUID="1474234e-7956-446d-ab4f-5e4881b3e006" Jan 19 12:10:03.484040 containerd[1618]: time="2026-01-19T12:10:03.480659616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:03.500934 containerd[1618]: time="2026-01-19T12:10:03.500570333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:10:04.317804 kubelet[2862]: I0119 12:10:04.316842 2862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qrs\" (UniqueName: 
\"kubernetes.io/projected/1474234e-7956-446d-ab4f-5e4881b3e006-kube-api-access-t7qrs\") pod \"1474234e-7956-446d-ab4f-5e4881b3e006\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " Jan 19 12:10:04.317804 kubelet[2862]: I0119 12:10:04.317057 2862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-backend-key-pair\") pod \"1474234e-7956-446d-ab4f-5e4881b3e006\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " Jan 19 12:10:04.317804 kubelet[2862]: I0119 12:10:04.317075 2862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-ca-bundle\") pod \"1474234e-7956-446d-ab4f-5e4881b3e006\" (UID: \"1474234e-7956-446d-ab4f-5e4881b3e006\") " Jan 19 12:10:04.318820 kubelet[2862]: I0119 12:10:04.318001 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1474234e-7956-446d-ab4f-5e4881b3e006" (UID: "1474234e-7956-446d-ab4f-5e4881b3e006"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 19 12:10:04.420854 kubelet[2862]: I0119 12:10:04.419863 2862 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 19 12:10:04.423759 systemd[1]: var-lib-kubelet-pods-1474234e\x2d7956\x2d446d\x2dab4f\x2d5e4881b3e006-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7qrs.mount: Deactivated successfully. Jan 19 12:10:04.438654 kubelet[2862]: I0119 12:10:04.438617 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1474234e-7956-446d-ab4f-5e4881b3e006-kube-api-access-t7qrs" (OuterVolumeSpecName: "kube-api-access-t7qrs") pod "1474234e-7956-446d-ab4f-5e4881b3e006" (UID: "1474234e-7956-446d-ab4f-5e4881b3e006"). InnerVolumeSpecName "kube-api-access-t7qrs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 19 12:10:04.448849 systemd[1]: var-lib-kubelet-pods-1474234e\x2d7956\x2d446d\x2dab4f\x2d5e4881b3e006-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 19 12:10:04.458967 kubelet[2862]: I0119 12:10:04.457885 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1474234e-7956-446d-ab4f-5e4881b3e006" (UID: "1474234e-7956-446d-ab4f-5e4881b3e006"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 19 12:10:04.520707 kubelet[2862]: I0119 12:10:04.520659 2862 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1474234e-7956-446d-ab4f-5e4881b3e006-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 19 12:10:04.521846 kubelet[2862]: I0119 12:10:04.521797 2862 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7qrs\" (UniqueName: \"kubernetes.io/projected/1474234e-7956-446d-ab4f-5e4881b3e006-kube-api-access-t7qrs\") on node \"localhost\" DevicePath \"\"" Jan 19 12:10:05.150833 systemd[1]: Removed slice kubepods-besteffort-pod1474234e_7956_446d_ab4f_5e4881b3e006.slice - libcontainer container kubepods-besteffort-pod1474234e_7956_446d_ab4f_5e4881b3e006.slice. Jan 19 12:10:05.440739 kubelet[2862]: E0119 12:10:05.440650 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:05.469863 kubelet[2862]: I0119 12:10:05.467067 2862 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1474234e-7956-446d-ab4f-5e4881b3e006" path="/var/lib/kubelet/pods/1474234e-7956-446d-ab4f-5e4881b3e006/volumes" Jan 19 12:10:05.508617 systemd[1]: Created slice kubepods-besteffort-podacf44a01_9bd4_43fa_8dda_bec90148f2fd.slice - libcontainer container kubepods-besteffort-podacf44a01_9bd4_43fa_8dda_bec90148f2fd.slice. Jan 19 12:10:05.566082 kubelet[2862]: I0119 12:10:05.562705 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bvr\" (UniqueName: \"kubernetes.io/projected/acf44a01-9bd4-43fa-8dda-bec90148f2fd-kube-api-access-m9bvr\") pod \"whisker-6c46bc687f-z2lmj\" (UID: \"acf44a01-9bd4-43fa-8dda-bec90148f2fd\") " pod="calico-system/whisker-6c46bc687f-z2lmj" Jan 19 12:10:05.566082 kubelet[2862]: I0119 12:10:05.562765 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acf44a01-9bd4-43fa-8dda-bec90148f2fd-whisker-backend-key-pair\") pod \"whisker-6c46bc687f-z2lmj\" (UID: \"acf44a01-9bd4-43fa-8dda-bec90148f2fd\") " pod="calico-system/whisker-6c46bc687f-z2lmj" Jan 19 12:10:05.566082 kubelet[2862]: I0119 12:10:05.562788 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf44a01-9bd4-43fa-8dda-bec90148f2fd-whisker-ca-bundle\") pod \"whisker-6c46bc687f-z2lmj\" (UID: \"acf44a01-9bd4-43fa-8dda-bec90148f2fd\") " pod="calico-system/whisker-6c46bc687f-z2lmj" Jan 19 12:10:05.776891 systemd-networkd[1519]: cali761c95c0fa4: Link UP Jan 19 12:10:05.786934 systemd-networkd[1519]: cali761c95c0fa4: Gained carrier Jan 19 12:10:05.936037 containerd[1618]: 2026-01-19 12:10:04.121 [INFO][4793] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:10:05.936037 containerd[1618]: 2026-01-19 12:10:04.286 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sh4c8-eth0 csi-node-driver- calico-system fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5 803 0 2026-01-19 12:08:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sh4c8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali761c95c0fa4 [] [] }} ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-" Jan 19 12:10:05.936037 containerd[1618]: 2026-01-19 12:10:04.287 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.936037 containerd[1618]: 2026-01-19 12:10:04.930 [INFO][4837] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" HandleID="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Workload="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:04.941 [INFO][4837] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" HandleID="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Workload="localhost-k8s-csi--node--driver--sh4c8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sh4c8", "timestamp":"2026-01-19 12:10:04.930879737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:04.941 [INFO][4837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:04.955 [INFO][4837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:04.977 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.058 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" host="localhost" Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.165 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.221 [INFO][4837] ipam/ipam.go 543: Ran out of existing affine blocks for host host="localhost" Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.252 [INFO][4837] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.305 [INFO][4837] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.305 [INFO][4837] ipam/ipam.go 572: Found unclaimed block host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.939580 containerd[1618]: 2026-01-19 12:10:05.305 [INFO][4837] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.340 [INFO][4837] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.343 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.371 [INFO][4837] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.410 [INFO][4837] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.451 [INFO][4837] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.451 [INFO][4837] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.523 [INFO][4837] ipam/ipam_block_reader_writer.go 267: Successfully created block Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.523 [INFO][4837] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.551 [INFO][4837] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.551 [INFO][4837] ipam/ipam.go 607: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.551 [INFO][4837] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" host="localhost" Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.573 [INFO][4837] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33 Jan 19 12:10:05.941038 containerd[1618]: 2026-01-19 12:10:05.586 [INFO][4837] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" host="localhost" Jan 19 12:10:05.941912 containerd[1618]: 2026-01-19 12:10:05.609 [INFO][4837] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" host="localhost" Jan 19 12:10:05.941912 containerd[1618]: 2026-01-19 12:10:05.609 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" host="localhost" Jan 19 12:10:05.941912 containerd[1618]: 2026-01-19 12:10:05.609 [INFO][4837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:10:05.941912 containerd[1618]: 2026-01-19 12:10:05.610 [INFO][4837] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" HandleID="k8s-pod-network.27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Workload="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.942037 containerd[1618]: 2026-01-19 12:10:05.626 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sh4c8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sh4c8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali761c95c0fa4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:05.950889 containerd[1618]: 2026-01-19 12:10:05.627 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.950889 containerd[1618]: 2026-01-19 12:10:05.627 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali761c95c0fa4 ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.950889 containerd[1618]: 2026-01-19 12:10:05.800 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:05.950996 containerd[1618]: 2026-01-19 12:10:05.814 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sh4c8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33", Pod:"csi-node-driver-sh4c8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali761c95c0fa4", MAC:"da:69:3e:f4:04:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:05.954807 containerd[1618]: 2026-01-19 12:10:05.914 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" Namespace="calico-system" Pod="csi-node-driver-sh4c8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sh4c8-eth0" Jan 19 12:10:06.198897 containerd[1618]: time="2026-01-19T12:10:06.197607893Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c46bc687f-z2lmj,Uid:acf44a01-9bd4-43fa-8dda-bec90148f2fd,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:06.221990 systemd-networkd[1519]: cali6cc06625b7c: Link UP Jan 19 12:10:06.229740 systemd-networkd[1519]: cali6cc06625b7c: Gained carrier Jan 19 12:10:06.375676 containerd[1618]: 2026-01-19 12:10:04.090 [INFO][4792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:10:06.375676 containerd[1618]: 2026-01-19 12:10:04.379 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0 calico-apiserver-595df97b5c- calico-apiserver 6b9860e0-5e66-444d-bc4f-e74b59e19721 944 0 2026-01-19 12:08:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:595df97b5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-595df97b5c-6lb2q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cc06625b7c [] [] }} ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-" Jan 19 12:10:06.375676 containerd[1618]: 2026-01-19 12:10:04.385 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.375676 containerd[1618]: 2026-01-19 12:10:04.936 [INFO][4845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" HandleID="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Workload="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:04.937 [INFO][4845] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" HandleID="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Workload="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bbaa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-595df97b5c-6lb2q", "timestamp":"2026-01-19 12:10:04.936790086 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:04.937 [INFO][4845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.610 [INFO][4845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.611 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.671 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" host="localhost" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.764 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.835 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.922 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.971 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:06.376997 containerd[1618]: 2026-01-19 12:10:05.971 [INFO][4845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" host="localhost" Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:05.983 [INFO][4845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663 Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:06.031 [INFO][4845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" host="localhost" Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:06.074 [INFO][4845] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" host="localhost" Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:06.076 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" host="localhost" Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:06.081 [INFO][4845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
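The records above trace Calico's IPAM assignment for the calico-apiserver pod in order: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, claim the next free address, write the block back to the datastore, and release the lock. The following is a minimal illustrative Go sketch of that assign-under-lock pattern, not Calico's actual implementation; the allocator, block, and handle names here are hypothetical.

```go
// ipam_sketch.go — illustrative only; not Calico's real IPAM code.
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block models a /26 affinity block: 64 addresses tracked by a simple bitmap.
type block struct {
	cidr      netip.Prefix
	allocated [64]bool
}

// allocator serializes assignments with a host-wide lock, mirroring the
// "About to acquire / Acquired / Released host-wide IPAM lock" records above.
type allocator struct {
	mu    sync.Mutex
	block *block
}

// assignOne claims the next free address in the affinity block; the handle
// bookkeeping that Calico performs per allocation is elided for brevity.
func (a *allocator) assignOne(handle string) (netip.Addr, error) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	addr := a.block.cidr.Addr()
	for i := 0; i < 64; i++ {
		if !a.block.allocated[i] {
			a.block.allocated[i] = true // "Writing block in order to claim IPs"
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, errors.New("block exhausted for handle " + handle)
}

func main() {
	cidr := netip.MustParsePrefix("192.168.88.128/26")
	a := &allocator{block: &block{cidr: cidr}}
	for i := 0; i < 2; i++ {
		ip, err := a.assignOne("k8s-pod-network.example")
		if err != nil {
			panic(err)
		}
		fmt.Println("assigned", ip) // 192.168.88.128, then 192.168.88.129
	}
}
```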
Jan 19 12:10:06.378576 containerd[1618]: 2026-01-19 12:10:06.081 [INFO][4845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" HandleID="k8s-pod-network.5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Workload="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.378708 containerd[1618]: 2026-01-19 12:10:06.117 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0", GenerateName:"calico-apiserver-595df97b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b9860e0-5e66-444d-bc4f-e74b59e19721", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595df97b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-595df97b5c-6lb2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cc06625b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:06.382906 containerd[1618]: 2026-01-19 12:10:06.117 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.382906 containerd[1618]: 2026-01-19 12:10:06.117 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cc06625b7c ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.382906 containerd[1618]: 2026-01-19 12:10:06.218 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.382982 containerd[1618]: 2026-01-19 12:10:06.219 [INFO][4792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0", GenerateName:"calico-apiserver-595df97b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b9860e0-5e66-444d-bc4f-e74b59e19721", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595df97b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663", Pod:"calico-apiserver-595df97b5c-6lb2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cc06625b7c", MAC:"0e:27:d4:42:08:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:06.383656 containerd[1618]: 2026-01-19 12:10:06.359 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-6lb2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--6lb2q-eth0" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.823 [INFO][4755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.835 [INFO][4755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" iface="eth0" netns="/var/run/netns/cni-5cc3c000-95c6-877c-be2c-cfec3b432e45" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.836 [INFO][4755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" iface="eth0" netns="/var/run/netns/cni-5cc3c000-95c6-877c-be2c-cfec3b432e45" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.839 [INFO][4755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" iface="eth0" netns="/var/run/netns/cni-5cc3c000-95c6-877c-be2c-cfec3b432e45" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.839 [INFO][4755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:03.839 [INFO][4755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:04.928 [INFO][4821] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" HandleID="k8s-pod-network.b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:04.984 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:06.464625 containerd[1618]: 2026-01-19 12:10:06.076 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:10:06.464920 containerd[1618]: 2026-01-19 12:10:06.114 [WARNING][4821] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" HandleID="k8s-pod-network.b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:06.464920 containerd[1618]: 2026-01-19 12:10:06.114 [INFO][4821] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" HandleID="k8s-pod-network.b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:06.464920 containerd[1618]: 2026-01-19 12:10:06.164 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:10:06.464920 containerd[1618]: 2026-01-19 12:10:06.402 [INFO][4755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0" Jan 19 12:10:06.465983 systemd[1]: run-netns-cni\x2d5cc3c000\x2d95c6\x2d877c\x2dbe2c\x2dcfec3b432e45.mount: Deactivated successfully. 
Jan 19 12:10:06.534069 containerd[1618]: time="2026-01-19T12:10:06.533911775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:06.564022 kubelet[2862]: E0119 12:10:06.563031 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:10:06.564022 kubelet[2862]: E0119 12:10:06.563586 2862 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:10:06.564022 kubelet[2862]: E0119 12:10:06.563622 2862 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" Jan 19 12:10:06.571041 kubelet[2862]: E0119 12:10:06.570010 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6e71ac3ea503cef97a302c705bf0de4832f0def1a23bfe50a3e2b3d362b51f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:06.760082 containerd[1618]: time="2026-01-19T12:10:06.753012425Z" level=info msg="connecting to shim 27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33" address="unix:///run/containerd/s/05c19f032faf63fe6eccc7cd8908e351ee60423bd64f7203600dc39235bc4528" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:06.804616 containerd[1618]: time="2026-01-19T12:10:06.804573155Z" level=info msg="connecting to shim 5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663" address="unix:///run/containerd/s/84505db02ea697aa7ad0474bdd8fee91045adab70001fe8dda86ce8009a4df10" namespace=k8s.io protocol=ttrpc version=3 Jan 19 
12:10:07.204605 containerd[1618]: time="2026-01-19T12:10:07.204080465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:07.394031 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 19 12:10:07.409570 kernel: audit: type=1130 audit(1768824607.313:586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.55:22-10.0.0.1:39506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:07.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.55:22-10.0.0.1:39506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:07.314891 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:39506.service - OpenSSH per-connection server daemon (10.0.0.1:39506). Jan 19 12:10:07.487747 systemd[1]: Started cri-containerd-27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33.scope - libcontainer container 27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33. Jan 19 12:10:07.609835 systemd[1]: Started cri-containerd-5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663.scope - libcontainer container 5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663. Jan 19 12:10:07.797000 audit: BPF prog-id=175 op=LOAD Jan 19 12:10:07.820948 kernel: audit: type=1334 audit(1768824607.797:587): prog-id=175 op=LOAD Jan 19 12:10:07.822553 systemd-networkd[1519]: cali761c95c0fa4: Gained IPv6LL Jan 19 12:10:07.832000 audit: BPF prog-id=176 op=LOAD Jan 19 12:10:07.871557 kernel: audit: type=1334 audit(1768824607.832:588): prog-id=176 op=LOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.881949 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:07.959472 kernel: audit: type=1300 audit(1768824607.832:588): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:08.041749 kernel: audit: type=1327 audit(1768824607.832:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.992674 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:07.995075 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 19 12:10:08.042917 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 39506 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:08.016955 systemd-logind[1598]: New session 10 of user core. Jan 19 12:10:08.046044 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 19 12:10:08.096011 kernel: audit: type=1334 audit(1768824607.832:589): prog-id=176 op=UNLOAD Jan 19 12:10:07.832000 audit: BPF prog-id=176 op=UNLOAD Jan 19 12:10:08.187659 kernel: audit: type=1300 audit(1768824607.832:589): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:08.171992 systemd-networkd[1519]: cali6cc06625b7c: Gained IPv6LL Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:08.219572 kernel: audit: type=1327 audit(1768824607.832:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:08.219652 kernel: audit: type=1334 audit(1768824607.832:590): prog-id=177 op=LOAD Jan 19 12:10:08.219684 kernel: audit: type=1300 audit(1768824607.832:590): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: BPF prog-id=177 op=LOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.832000 audit: BPF prog-id=178 op=LOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.832000 audit: BPF prog-id=178 op=UNLOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.832000 audit: BPF prog-id=177 op=UNLOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.832000 audit: BPF prog-id=179 op=LOAD Jan 19 12:10:07.832000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4913 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237626461653533386234396130396237393364373431613031633833 Jan 19 12:10:07.842000 audit: BPF prog-id=180 op=LOAD Jan 19 12:10:07.878000 audit: BPF prog-id=181 op=LOAD Jan 19 12:10:07.878000 audit[4937]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.878000 audit: BPF prog-id=181 op=UNLOAD Jan 19 12:10:07.878000 audit[4937]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.878000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.919000 audit: BPF prog-id=182 op=LOAD Jan 19 12:10:07.919000 audit[4937]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.919000 audit: BPF prog-id=183 op=LOAD Jan 19 12:10:07.919000 audit[4937]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.919000 audit: BPF prog-id=183 op=UNLOAD Jan 19 12:10:07.919000 audit[4937]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.919000 audit: BPF prog-id=182 op=UNLOAD Jan 19 12:10:07.919000 audit[4937]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.919000 audit: BPF prog-id=184 op=LOAD Jan 19 12:10:07.919000 audit[4937]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4915 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.919000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566656238663939656662316165373432663934363531383933663366 Jan 19 12:10:07.985000 audit[4982]: USER_ACCT pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:07.990000 audit[4982]: CRED_ACQ pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:07.991000 audit[4982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5f710c10 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:07.991000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:08.061000 audit[4982]: USER_START pid=4982 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:08.070000 audit[5015]: CRED_ACQ pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:08.350800 systemd-networkd[1519]: cali5fe2da2ed88: Link UP Jan 19 12:10:08.351025 systemd-networkd[1519]: cali5fe2da2ed88: Gained carrier Jan 19 12:10:08.476629 containerd[1618]: time="2026-01-19T12:10:08.476589340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,}" Jan 19 12:10:08.482047 containerd[1618]: 2026-01-19 12:10:06.646 [INFO][4879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:10:08.482047 containerd[1618]: 2026-01-19 12:10:06.781 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c46bc687f--z2lmj-eth0 whisker-6c46bc687f- calico-system acf44a01-9bd4-43fa-8dda-bec90148f2fd 1160 0 2026-01-19 12:10:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c46bc687f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c46bc687f-z2lmj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5fe2da2ed88 [] [] }} ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-" Jan 19 12:10:08.482047 containerd[1618]: 2026-01-19 12:10:06.782 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" 
Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.482047 containerd[1618]: 2026-01-19 12:10:07.267 [INFO][4923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" HandleID="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Workload="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.274 [INFO][4923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" HandleID="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Workload="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000391960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c46bc687f-z2lmj", "timestamp":"2026-01-19 12:10:07.267931077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.274 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.274 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.274 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.481 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" host="localhost" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.599 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.712 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.760 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.840 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:08.483610 containerd[1618]: 2026-01-19 12:10:07.840 [INFO][4923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" host="localhost" Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:07.935 [INFO][4923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46 Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:08.010 [INFO][4923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" host="localhost" Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:08.084 [INFO][4923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" host="localhost" Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:08.084 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" host="localhost" Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:08.084 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:10:08.489696 containerd[1618]: 2026-01-19 12:10:08.084 [INFO][4923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" HandleID="k8s-pod-network.45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Workload="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.489852 containerd[1618]: 2026-01-19 12:10:08.286 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c46bc687f--z2lmj-eth0", GenerateName:"whisker-6c46bc687f-", Namespace:"calico-system", SelfLink:"", UID:"acf44a01-9bd4-43fa-8dda-bec90148f2fd", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c46bc687f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c46bc687f-z2lmj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5fe2da2ed88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:08.489852 containerd[1618]: 2026-01-19 12:10:08.286 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.490898 containerd[1618]: 2026-01-19 12:10:08.286 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fe2da2ed88 ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.490898 containerd[1618]: 2026-01-19 12:10:08.352 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" 
Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.490960 containerd[1618]: 2026-01-19 12:10:08.361 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c46bc687f--z2lmj-eth0", GenerateName:"whisker-6c46bc687f-", Namespace:"calico-system", SelfLink:"", UID:"acf44a01-9bd4-43fa-8dda-bec90148f2fd", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 10, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c46bc687f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46", Pod:"whisker-6c46bc687f-z2lmj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5fe2da2ed88", MAC:"b2:b6:0a:ee:ca:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:08.491797 containerd[1618]: 2026-01-19 12:10:08.453 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" Namespace="calico-system" Pod="whisker-6c46bc687f-z2lmj" WorkloadEndpoint="localhost-k8s-whisker--6c46bc687f--z2lmj-eth0" Jan 19 12:10:08.832598 containerd[1618]: time="2026-01-19T12:10:08.798990721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-6lb2q,Uid:6b9860e0-5e66-444d-bc4f-e74b59e19721,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5feb8f99efb1ae742f94651893f3fc1d0aff7355a4a84f4f8c2be36350f8c663\"" Jan 19 12:10:08.816907 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:08.832941 sshd[5015]: Connection closed by 10.0.0.1 port 39506 Jan 19 12:10:08.845957 containerd[1618]: time="2026-01-19T12:10:08.845924015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:08.949000 audit[4982]: USER_END pid=4982 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:08.950000 audit[4982]: CRED_DISP pid=4982 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:08.957897 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:39506.service: Deactivated successfully. Jan 19 12:10:08.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.55:22-10.0.0.1:39506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:08.967983 systemd[1]: session-10.scope: Deactivated successfully. Jan 19 12:10:08.990007 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Jan 19 12:10:09.004630 systemd-logind[1598]: Removed session 10. Jan 19 12:10:09.066614 containerd[1618]: time="2026-01-19T12:10:09.066576758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:09.102023 containerd[1618]: time="2026-01-19T12:10:09.097934308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:09.102023 containerd[1618]: time="2026-01-19T12:10:09.099081561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:09.104065 kubelet[2862]: E0119 12:10:09.103866 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:09.105572 kubelet[2862]: E0119 12:10:09.104076 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:09.105572 kubelet[2862]: E0119 12:10:09.104815 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:09.105572 kubelet[2862]: E0119 12:10:09.104848 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:09.155054 containerd[1618]: time="2026-01-19T12:10:09.153598727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sh4c8,Uid:fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"27bdae538b49a09b793d741a01c838a4078eea110a699ab94ea94f0cfa620c33\"" Jan 19 12:10:09.185706 containerd[1618]: time="2026-01-19T12:10:09.184750397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:10:09.253606 containerd[1618]: 
time="2026-01-19T12:10:09.251875603Z" level=info msg="connecting to shim 45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46" address="unix:///run/containerd/s/5b019c4157e2c04aa484356ea263c66187ed910786686261d6790997a6d6fb14" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:09.379885 containerd[1618]: time="2026-01-19T12:10:09.377869908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:09.392826 kubelet[2862]: E0119 12:10:09.387593 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:09.424626 containerd[1618]: time="2026-01-19T12:10:09.420082187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:10:09.424626 containerd[1618]: time="2026-01-19T12:10:09.420741449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:09.424791 kubelet[2862]: E0119 12:10:09.421031 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:09.424791 kubelet[2862]: E0119 12:10:09.421066 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:09.432082 kubelet[2862]: E0119 12:10:09.429547 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:09.462731 containerd[1618]: time="2026-01-19T12:10:09.453957337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:10:09.562902 systemd-networkd[1519]: cali959cae999ea: Link UP Jan 19 12:10:09.565051 systemd-networkd[1519]: cali959cae999ea: Gained carrier Jan 19 12:10:09.579691 systemd-networkd[1519]: cali5fe2da2ed88: Gained IPv6LL Jan 19 12:10:09.665468 containerd[1618]: 2026-01-19 12:10:07.958 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:10:09.665468 containerd[1618]: 2026-01-19 12:10:08.124 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0 calico-kube-controllers-975cd56bc- calico-system 75367eb7-d5a3-4610-be00-cbd5e7d7db9d 1135 0 2026-01-19 
12:08:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:975cd56bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-975cd56bc-wkczn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali959cae999ea [] [] }} ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-" Jan 19 12:10:09.665468 containerd[1618]: 2026-01-19 12:10:08.125 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.665468 containerd[1618]: 2026-01-19 12:10:08.802 [INFO][5021] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" HandleID="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:08.831 [INFO][5021] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" HandleID="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-975cd56bc-wkczn", "timestamp":"2026-01-19 12:10:08.802038043 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:08.832 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:08.844 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:08.845 [INFO][5021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.008 [INFO][5021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" host="localhost" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.051 [INFO][5021] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.159 [INFO][5021] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.253 [INFO][5021] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.281 [INFO][5021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:09.672613 containerd[1618]: 2026-01-19 12:10:09.292 [INFO][5021] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" host="localhost" Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.315 [INFO][5021] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02 Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.372 [INFO][5021] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" host="localhost" Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.472 [INFO][5021] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" host="localhost" Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.485 [INFO][5021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" host="localhost" Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.485 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
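For orientation while reading these repeated IPAM passes: the affinity block 192.168.88.128/26 spans 64 addresses, 192.168.88.128 through 192.168.88.191, which is why every pod in this section (.128, .129, .131, .132) lands in the same block on the same host. A short standard-library Go snippet computing that range, included only as a reading aid:

```go
// block_range.go — prints the address span of the 192.168.88.128/26 block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 64 addresses in a /26

	first := p.Addr()
	last := first
	for i := 1; i < size; i++ {
		last = last.Next()
	}
	fmt.Printf("%s holds %d addresses: %s - %s\n", p, size, first, last)
	// Output: 192.168.88.128/26 holds 64 addresses: 192.168.88.128 - 192.168.88.191
}
```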
Jan 19 12:10:09.673062 containerd[1618]: 2026-01-19 12:10:09.485 [INFO][5021] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" HandleID="k8s-pod-network.d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Workload="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.676944 containerd[1618]: 2026-01-19 12:10:09.553 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0", GenerateName:"calico-kube-controllers-975cd56bc-", Namespace:"calico-system", SelfLink:"", UID:"75367eb7-d5a3-4610-be00-cbd5e7d7db9d", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975cd56bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-975cd56bc-wkczn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali959cae999ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:09.677888 containerd[1618]: 2026-01-19 12:10:09.553 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.677888 containerd[1618]: 2026-01-19 12:10:09.553 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali959cae999ea ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.677888 containerd[1618]: 2026-01-19 12:10:09.566 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.683468 containerd[1618]: 2026-01-19 12:10:09.575 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0", GenerateName:"calico-kube-controllers-975cd56bc-", Namespace:"calico-system", SelfLink:"", UID:"75367eb7-d5a3-4610-be00-cbd5e7d7db9d", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975cd56bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02", Pod:"calico-kube-controllers-975cd56bc-wkczn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali959cae999ea", MAC:"ca:c8:0c:8b:ed:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:09.683736 kubelet[2862]: E0119 12:10:09.679539 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:09.683736 kubelet[2862]: E0119 12:10:09.679588 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:09.683736 kubelet[2862]: E0119 12:10:09.679674 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:09.683736 kubelet[2862]: E0119 12:10:09.679733 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:09.684049 containerd[1618]: 2026-01-19 12:10:09.637 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" Namespace="calico-system" Pod="calico-kube-controllers-975cd56bc-wkczn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975cd56bc--wkczn-eth0" Jan 19 12:10:09.684049 containerd[1618]: time="2026-01-19T12:10:09.667794738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:09.684049 containerd[1618]: time="2026-01-19T12:10:09.678671873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:10:09.684049 containerd[1618]: time="2026-01-19T12:10:09.678754828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:09.730000 audit[5161]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:09.730000 audit[5161]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe9d7a9610 a2=0 a3=7ffe9d7a95fc items=0 ppid=3029 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:09.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:09.739000 audit[5161]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=5161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:09.739000 audit[5161]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe9d7a9610 a2=0 a3=0 items=0 ppid=3029 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:09.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:09.833825 systemd[1]: Started cri-containerd-45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46.scope - libcontainer container 45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46. 
Jan 19 12:10:09.946902 containerd[1618]: time="2026-01-19T12:10:09.946673343Z" level=info msg="connecting to shim d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02" address="unix:///run/containerd/s/58ae4f0b3ee6ea9c739e78dd739f638da4e40a5c9a406a992a3a2c227315cb62" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:10.043987 systemd[1]: Started cri-containerd-d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02.scope - libcontainer container d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02. Jan 19 12:10:10.067496 systemd-networkd[1519]: caliac0daf977ad: Link UP Jan 19 12:10:10.071302 systemd-networkd[1519]: caliac0daf977ad: Gained carrier Jan 19 12:10:10.112672 containerd[1618]: 2026-01-19 12:10:09.025 [INFO][5067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:10:10.112672 containerd[1618]: 2026-01-19 12:10:09.181 [INFO][5067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--9qp6h-eth0 goldmane-7c778bb748- calico-system c862d411-4d4f-4a97-b967-49e1eb15851d 946 0 2026-01-19 12:08:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-9qp6h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliac0daf977ad [] [] }} ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-" Jan 19 12:10:10.112672 containerd[1618]: 2026-01-19 12:10:09.181 [INFO][5067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.112672 containerd[1618]: 2026-01-19 12:10:09.731 [INFO][5133] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" HandleID="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Workload="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.746 [INFO][5133] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" HandleID="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Workload="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-9qp6h", "timestamp":"2026-01-19 12:10:09.731877756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.746 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.746 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.747 [INFO][5133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.788 [INFO][5133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" host="localhost" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.884 [INFO][5133] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.968 [INFO][5133] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.977 [INFO][5133] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.997 [INFO][5133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:10.113050 containerd[1618]: 2026-01-19 12:10:09.997 [INFO][5133] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" host="localhost" Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.010 [INFO][5133] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29 Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.026 [INFO][5133] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" host="localhost" Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.046 [INFO][5133] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" host="localhost" Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.048 [INFO][5133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" host="localhost" Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.048 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:10:10.114633 containerd[1618]: 2026-01-19 12:10:10.048 [INFO][5133] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" HandleID="k8s-pod-network.b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Workload="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.115050 containerd[1618]: 2026-01-19 12:10:10.056 [INFO][5067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--9qp6h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c862d411-4d4f-4a97-b967-49e1eb15851d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-9qp6h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac0daf977ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:10.115050 containerd[1618]: 2026-01-19 12:10:10.056 [INFO][5067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.118306 containerd[1618]: 2026-01-19 12:10:10.056 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac0daf977ad ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.118306 containerd[1618]: 2026-01-19 12:10:10.072 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.118450 containerd[1618]: 2026-01-19 12:10:10.075 [INFO][5067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--9qp6h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c862d411-4d4f-4a97-b967-49e1eb15851d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29", Pod:"goldmane-7c778bb748-9qp6h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac0daf977ad", MAC:"a2:97:0a:4b:35:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:10.118664 containerd[1618]: 2026-01-19 12:10:10.099 [INFO][5067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" Namespace="calico-system" Pod="goldmane-7c778bb748-9qp6h" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--9qp6h-eth0" Jan 19 12:10:10.151000 audit: BPF prog-id=185 op=LOAD Jan 19 12:10:10.153000 audit: BPF prog-id=186 op=LOAD Jan 19 12:10:10.153000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.156000 audit: BPF prog-id=186 op=UNLOAD Jan 19 12:10:10.156000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.161000 audit: BPF prog-id=187 op=LOAD Jan 19 12:10:10.161000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.162000 audit: BPF prog-id=188 op=LOAD Jan 19 12:10:10.162000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.162000 audit: BPF prog-id=188 op=UNLOAD Jan 19 12:10:10.162000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.162000 audit: BPF prog-id=187 op=UNLOAD Jan 19 12:10:10.162000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.162000 audit: BPF prog-id=189 op=LOAD Jan 19 12:10:10.162000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5128 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353430636364366132386230666632633137383538633536386164 Jan 19 12:10:10.169291 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:10.179000 audit: BPF prog-id=190 op=LOAD Jan 19 12:10:10.182000 audit: BPF prog-id=191 op=LOAD Jan 19 12:10:10.182000 audit[5237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.184000 audit: BPF prog-id=191 op=UNLOAD Jan 19 12:10:10.184000 audit[5237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.186000 audit: BPF prog-id=192 op=LOAD Jan 19 12:10:10.186000 audit[5237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.190000 audit: BPF prog-id=193 op=LOAD Jan 19 12:10:10.190000 audit[5237]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.191000 audit: BPF prog-id=193 op=UNLOAD Jan 19 12:10:10.191000 audit[5237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.191000 audit: BPF prog-id=192 op=UNLOAD Jan 19 12:10:10.191000 audit[5237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.191000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.191000 audit: BPF prog-id=194 op=LOAD Jan 19 12:10:10.191000 audit[5237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5210 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383637393063376565656464376164336134653536643632346537 Jan 19 12:10:10.225789 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:10.273882 containerd[1618]: time="2026-01-19T12:10:10.273641240Z" level=info msg="connecting to shim b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29" address="unix:///run/containerd/s/328ccbd6ef60c56e8cb136904ecebc41bc0f31bf2422a7dad2482b8070645b7e" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:10.450022 kubelet[2862]: E0119 12:10:10.449701 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:10.460256 kubelet[2862]: E0119 12:10:10.459839 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:10.462771 systemd[1]: Started cri-containerd-b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29.scope - libcontainer container b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29. 
Jan 19 12:10:10.570536 containerd[1618]: time="2026-01-19T12:10:10.570485251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975cd56bc-wkczn,Uid:75367eb7-d5a3-4610-be00-cbd5e7d7db9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d586790c7eeedd7ad3a4e56d624e763585f95bb8b363e086cce28e167a83bb02\"" Jan 19 12:10:10.574649 containerd[1618]: time="2026-01-19T12:10:10.574281602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:10:10.606560 containerd[1618]: time="2026-01-19T12:10:10.606012273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c46bc687f-z2lmj,Uid:acf44a01-9bd4-43fa-8dda-bec90148f2fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"45540ccd6a28b0ff2c17858c568ad4d5d59aeabab840e70ffa5d93cdbf27ba46\"" Jan 19 12:10:10.612000 audit: BPF prog-id=195 op=LOAD Jan 19 12:10:10.617000 audit: BPF prog-id=196 op=LOAD Jan 19 12:10:10.617000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.617000 audit: BPF prog-id=196 op=UNLOAD Jan 19 12:10:10.617000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.618000 audit: BPF prog-id=197 op=LOAD Jan 19 12:10:10.618000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.618000 audit: BPF prog-id=198 op=LOAD Jan 19 12:10:10.618000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.618000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.618000 audit: BPF prog-id=198 op=UNLOAD Jan 19 12:10:10.618000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.618000 audit: BPF prog-id=197 op=UNLOAD Jan 19 12:10:10.618000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.618000 audit: BPF prog-id=199 op=LOAD Jan 19 12:10:10.618000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5281 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234626135653036326635393035393030336639356433646238636166 Jan 19 12:10:10.626073 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:10.658769 containerd[1618]: time="2026-01-19T12:10:10.658617887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:10.661860 containerd[1618]: time="2026-01-19T12:10:10.661484821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:10:10.661860 containerd[1618]: time="2026-01-19T12:10:10.661557026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:10.661954 kubelet[2862]: E0119 12:10:10.661707 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 
12:10:10.661954 kubelet[2862]: E0119 12:10:10.661744 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:10:10.661954 kubelet[2862]: E0119 12:10:10.661885 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:10.661954 kubelet[2862]: E0119 12:10:10.661914 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:10.662744 containerd[1618]: time="2026-01-19T12:10:10.662725083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:10:10.730509 systemd-networkd[1519]: cali959cae999ea: Gained IPv6LL Jan 19 12:10:10.751806 containerd[1618]: time="2026-01-19T12:10:10.751437577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:10.757495 containerd[1618]: time="2026-01-19T12:10:10.756532784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:10:10.757495 containerd[1618]: time="2026-01-19T12:10:10.757014002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:10.760460 kubelet[2862]: E0119 12:10:10.758898 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:10.760460 kubelet[2862]: E0119 12:10:10.758940 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:10.760460 kubelet[2862]: E0119 12:10:10.759436 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:10.764585 containerd[1618]: time="2026-01-19T12:10:10.763568073Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:10:10.801712 containerd[1618]: time="2026-01-19T12:10:10.800032110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9qp6h,Uid:c862d411-4d4f-4a97-b967-49e1eb15851d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4ba5e062f59059003f95d3db8caface12762cf92d6ff6fa42d47f7b3cc17c29\"" Jan 19 12:10:10.903000 audit: BPF prog-id=200 op=LOAD Jan 19 12:10:10.903000 audit[5365]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe10534730 a2=98 a3=1fffffffffffffff items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.903000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.904000 audit: BPF prog-id=200 op=UNLOAD Jan 19 12:10:10.904000 audit[5365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe10534700 a3=0 items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.904000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.904000 audit: BPF prog-id=201 op=LOAD Jan 19 12:10:10.904000 audit[5365]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe10534610 a2=94 a3=3 items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.904000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.905000 audit: BPF prog-id=201 op=UNLOAD Jan 19 12:10:10.905000 audit[5365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe10534610 a2=94 a3=3 items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.905000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.905000 audit: BPF prog-id=202 op=LOAD Jan 19 12:10:10.905000 audit[5365]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe10534650 a2=94 a3=7ffe10534830 items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 19 12:10:10.905000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.905000 audit: BPF prog-id=202 op=UNLOAD Jan 19 12:10:10.905000 audit[5365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe10534650 a2=94 a3=7ffe10534830 items=0 ppid=5057 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.905000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:10:10.914000 audit: BPF prog-id=203 op=LOAD Jan 19 12:10:10.914000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff1e97560 a2=98 a3=3 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.914000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.914000 audit: BPF prog-id=203 op=UNLOAD Jan 19 12:10:10.914000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff1e97530 a3=0 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.914000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.916000 audit: BPF prog-id=204 op=LOAD Jan 19 12:10:10.916000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff1e97350 a2=94 a3=54428f items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.916000 audit: BPF prog-id=204 op=UNLOAD Jan 19 12:10:10.916000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff1e97350 a2=94 a3=54428f items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.916000 audit: BPF prog-id=205 op=LOAD Jan 19 12:10:10.916000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff1e97380 a2=94 a3=2 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.916000 audit: BPF prog-id=205 op=UNLOAD Jan 19 12:10:10.916000 audit[5366]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=4 a1=7ffff1e97380 a2=0 a3=2 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:10.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:10.929690 containerd[1618]: time="2026-01-19T12:10:10.926314311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:10.934990 containerd[1618]: time="2026-01-19T12:10:10.934939269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:10:10.935779 containerd[1618]: time="2026-01-19T12:10:10.935656539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:10.936047 kubelet[2862]: E0119 12:10:10.936007 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:10.949407 kubelet[2862]: E0119 12:10:10.938061 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:10.951249 kubelet[2862]: E0119 12:10:10.949500 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:10.951249 kubelet[2862]: E0119 12:10:10.949566 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:10.951625 containerd[1618]: time="2026-01-19T12:10:10.951011232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:10:11.034989 containerd[1618]: time="2026-01-19T12:10:11.034909361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:11.037826 containerd[1618]: time="2026-01-19T12:10:11.036961073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:10:11.037826 containerd[1618]: time="2026-01-19T12:10:11.037053314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:11.039984 kubelet[2862]: E0119 12:10:11.039728 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:11.039984 kubelet[2862]: E0119 12:10:11.039783 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:11.042034 kubelet[2862]: E0119 12:10:11.039868 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:11.042034 kubelet[2862]: E0119 12:10:11.041709 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:11.270000 audit: BPF prog-id=206 op=LOAD Jan 19 12:10:11.270000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff1e97240 a2=94 a3=1 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.270000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.271000 audit: BPF prog-id=206 op=UNLOAD Jan 19 12:10:11.271000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff1e97240 a2=94 a3=1 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.271000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.289000 audit: BPF prog-id=207 op=LOAD Jan 19 12:10:11.289000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff1e97230 a2=94 a3=4 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.289000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.289000 audit: BPF prog-id=207 op=UNLOAD Jan 19 12:10:11.289000 audit[5366]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=5 a1=7ffff1e97230 a2=0 a3=4 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.289000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.289000 audit: BPF prog-id=208 op=LOAD Jan 19 12:10:11.289000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff1e97090 a2=94 a3=5 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.289000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.290000 audit: BPF prog-id=208 op=UNLOAD Jan 19 12:10:11.290000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff1e97090 a2=0 a3=5 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.290000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.290000 audit: BPF prog-id=209 op=LOAD Jan 19 12:10:11.290000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff1e972b0 a2=94 a3=6 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.290000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.290000 audit: BPF prog-id=209 op=UNLOAD Jan 19 12:10:11.290000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff1e972b0 a2=0 a3=6 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.290000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.290000 audit: BPF prog-id=210 op=LOAD Jan 19 12:10:11.290000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff1e96a60 a2=94 a3=88 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.290000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.291000 audit: BPF prog-id=211 op=LOAD Jan 19 12:10:11.291000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffff1e968e0 a2=94 a3=2 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.291000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.291000 audit: BPF prog-id=211 op=UNLOAD Jan 19 12:10:11.291000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffff1e96910 a2=0 a3=7ffff1e96a10 items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.291000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.292000 audit: BPF prog-id=210 op=UNLOAD Jan 19 12:10:11.292000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=25bc2d10 a2=0 a3=97448cca11edf46d items=0 ppid=5057 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.292000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:10:11.344000 audit: BPF prog-id=212 op=LOAD Jan 19 12:10:11.344000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf97bece0 a2=98 a3=1999999999999999 items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.344000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.347000 audit: BPF prog-id=212 op=UNLOAD Jan 19 12:10:11.347000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdf97becb0 a3=0 items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.347000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.347000 audit: BPF prog-id=213 op=LOAD Jan 19 12:10:11.347000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf97bebc0 a2=94 a3=ffff items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.347000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.347000 audit: BPF prog-id=213 op=UNLOAD Jan 19 12:10:11.347000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdf97bebc0 a2=94 a3=ffff items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.347000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.347000 audit: BPF prog-id=214 op=LOAD Jan 19 12:10:11.347000 
audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf97bec00 a2=94 a3=7ffdf97bede0 items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.347000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.347000 audit: BPF prog-id=214 op=UNLOAD Jan 19 12:10:11.347000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdf97bec00 a2=94 a3=7ffdf97bede0 items=0 ppid=5057 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.347000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:10:11.464961 kubelet[2862]: E0119 12:10:11.464760 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:11.498596 kubelet[2862]: E0119 12:10:11.493577 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:11.517283 kubelet[2862]: E0119 12:10:11.516044 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:11.549000 audit[5383]: NETFILTER_CFG table=filter:121 family=2 entries=20 
op=nft_register_rule pid=5383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:11.549000 audit[5383]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc98a0bb40 a2=0 a3=7ffc98a0bb2c items=0 ppid=3029 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:11.560000 audit[5383]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:11.560000 audit[5383]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc98a0bb40 a2=0 a3=0 items=0 ppid=3029 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:11.744959 systemd-networkd[1519]: vxlan.calico: Link UP Jan 19 12:10:11.744973 systemd-networkd[1519]: vxlan.calico: Gained carrier Jan 19 12:10:11.819582 systemd-networkd[1519]: caliac0daf977ad: Gained IPv6LL Jan 19 12:10:11.929000 audit: BPF prog-id=215 op=LOAD Jan 19 12:10:11.929000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd30c08a00 a2=98 a3=0 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.929000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.929000 audit: BPF prog-id=215 op=UNLOAD Jan 19 12:10:11.929000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd30c089d0 a3=0 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.929000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.930000 audit: BPF prog-id=216 op=LOAD Jan 19 12:10:11.930000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd30c08810 a2=94 a3=54428f items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.930000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.930000 audit: BPF prog-id=216 op=UNLOAD Jan 19 12:10:11.930000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 
a1=7ffd30c08810 a2=94 a3=54428f items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.930000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.931000 audit: BPF prog-id=217 op=LOAD Jan 19 12:10:11.931000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd30c08840 a2=94 a3=2 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.931000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.933000 audit: BPF prog-id=217 op=UNLOAD Jan 19 12:10:11.933000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd30c08840 a2=0 a3=2 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.937000 audit: BPF prog-id=218 op=LOAD Jan 19 12:10:11.937000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30c085f0 a2=94 a3=4 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.937000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.937000 audit: BPF prog-id=218 op=UNLOAD Jan 19 12:10:11.937000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd30c085f0 a2=94 a3=4 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.937000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.937000 audit: BPF prog-id=219 op=LOAD Jan 19 12:10:11.937000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30c086f0 a2=94 a3=7ffd30c08870 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.937000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.937000 audit: BPF prog-id=219 op=UNLOAD Jan 19 12:10:11.937000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd30c086f0 a2=0 a3=7ffd30c08870 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.937000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.939000 audit: BPF prog-id=220 op=LOAD Jan 19 12:10:11.939000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30c07e20 a2=94 a3=2 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.939000 audit: BPF prog-id=220 op=UNLOAD Jan 19 12:10:11.939000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd30c07e20 a2=0 a3=2 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.939000 audit: BPF prog-id=221 op=LOAD Jan 19 12:10:11.939000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30c07f20 a2=94 a3=30 items=0 ppid=5057 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:10:11.971000 audit: BPF prog-id=222 op=LOAD Jan 19 12:10:11.971000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc33302080 a2=98 a3=0 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.971000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:11.972000 audit: BPF prog-id=222 op=UNLOAD Jan 19 12:10:11.972000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=8 a2=7ffc33302050 a3=0 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:11.976000 audit: BPF prog-id=223 op=LOAD Jan 19 12:10:11.976000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc33301e70 a2=94 a3=54428f items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:11.976000 audit: BPF prog-id=223 op=UNLOAD Jan 19 12:10:11.976000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc33301e70 a2=94 a3=54428f items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:11.976000 audit: BPF prog-id=224 op=LOAD Jan 19 12:10:11.976000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc33301ea0 a2=94 a3=2 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:11.976000 audit: BPF prog-id=224 op=UNLOAD Jan 19 12:10:11.976000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc33301ea0 a2=0 a3=2 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:11.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.237000 audit: BPF prog-id=225 op=LOAD Jan 19 12:10:12.237000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc33301d60 a2=94 a3=1 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 
19 12:10:12.237000 audit: BPF prog-id=225 op=UNLOAD Jan 19 12:10:12.237000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc33301d60 a2=94 a3=1 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=226 op=LOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc33301d50 a2=94 a3=4 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=226 op=UNLOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc33301d50 a2=0 a3=4 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=227 op=LOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc33301bb0 a2=94 a3=5 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=227 op=UNLOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc33301bb0 a2=0 a3=5 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=228 op=LOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc33301dd0 a2=94 a3=6 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.248000 audit: BPF prog-id=228 op=UNLOAD Jan 19 12:10:12.248000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc33301dd0 a2=0 a3=6 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.249000 audit: BPF prog-id=229 op=LOAD Jan 19 12:10:12.249000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc33301580 a2=94 a3=88 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.249000 audit: BPF prog-id=230 op=LOAD Jan 19 12:10:12.249000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc33301400 a2=94 a3=2 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.249000 audit: BPF prog-id=230 op=UNLOAD Jan 19 12:10:12.249000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc33301430 a2=0 a3=7ffc33301530 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.250000 audit: BPF prog-id=229 op=UNLOAD Jan 19 12:10:12.250000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=31daad10 a2=0 a3=5cd95a9121a3fd68 items=0 ppid=5057 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:10:12.268000 audit: BPF prog-id=221 op=UNLOAD Jan 19 12:10:12.268000 audit[5057]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001020340 a2=0 a3=0 items=0 ppid=5035 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.268000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 19 12:10:12.471021 kernel: kauditd_printk_skb: 309 callbacks suppressed Jan 19 12:10:12.471459 kernel: audit: type=1325 audit(1768824612.444:701): table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.444000 audit[5437]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.444000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff4e9d4f0 a2=0 a3=7ffff4e9d4dc items=0 ppid=5057 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.515289 kernel: audit: type=1300 audit(1768824612.444:701): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff4e9d4f0 a2=0 a3=7ffff4e9d4dc items=0 ppid=5057 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.515467 kernel: audit: type=1327 audit(1768824612.444:701): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.444000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.453000 audit[5441]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.536015 kubelet[2862]: E0119 12:10:12.535531 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:12.536015 kubelet[2862]: E0119 12:10:12.535722 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:12.540530 kubelet[2862]: E0119 12:10:12.540476 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:12.453000 audit[5441]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf6b9b560 a2=0 a3=7ffdf6b9b54c items=0 ppid=5057 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.581552 kernel: audit: type=1325 audit(1768824612.453:702): table=nat:124 family=2 entries=15 op=nft_register_chain pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.581683 kernel: audit: type=1300 audit(1768824612.453:702): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf6b9b560 a2=0 a3=7ffdf6b9b54c items=0 ppid=5057 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.581725 kernel: audit: type=1327 audit(1768824612.453:702): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.453000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.528000 audit[5438]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.528000 audit[5438]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdffdbe9d0 a2=0 a3=7ffdffdbe9bc items=0 ppid=5057 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.652541 kernel: audit: type=1325 audit(1768824612.528:703): table=raw:125 family=2 entries=21 op=nft_register_chain pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.652614 kernel: audit: type=1300 audit(1768824612.528:703): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdffdbe9d0 a2=0 a3=7ffdffdbe9bc items=0 ppid=5057 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.652675 kernel: audit: type=1327 audit(1768824612.528:703): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.528000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:12.674000 
audit[5446]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:12.674000 audit[5446]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff64a600c0 a2=0 a3=7fff64a600ac items=0 ppid=3029 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:12.693487 kernel: audit: type=1325 audit(1768824612.674:704): table=filter:126 family=2 entries=20 op=nft_register_rule pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:12.706000 audit[5446]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:12.706000 audit[5446]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff64a600c0 a2=0 a3=0 items=0 ppid=3029 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:12.726000 audit[5440]: NETFILTER_CFG table=filter:128 family=2 entries=228 op=nft_register_chain pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:12.726000 audit[5440]: SYSCALL arch=c000003e syscall=46 success=yes exit=134056 a0=3 a1=7fff38dd8a30 a2=0 a3=0 items=0 ppid=5057 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:12.726000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:13.482989 systemd-networkd[1519]: vxlan.calico: Gained IPv6LL Jan 19 12:10:13.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.55:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:13.833421 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). 
Jan 19 12:10:13.967000 audit[5453]: USER_ACCT pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:13.968963 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:13.969000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:13.969000 audit[5453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7018d140 a2=3 a3=0 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:13.969000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:13.972671 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:13.989932 systemd-logind[1598]: New session 11 of user core. Jan 19 12:10:13.997813 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 19 12:10:14.002000 audit[5453]: USER_START pid=5453 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.006000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.230709 sshd[5457]: Connection closed by 10.0.0.1 port 33024 Jan 19 12:10:14.231520 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:14.233000 audit[5453]: USER_END pid=5453 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.234000 audit[5453]: CRED_DISP pid=5453 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.244909 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:33024.service: Deactivated successfully. Jan 19 12:10:14.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.55:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:14.248672 systemd[1]: session-11.scope: Deactivated successfully. Jan 19 12:10:14.253453 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. 
Jan 19 12:10:14.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.55:22-10.0.0.1:33032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:14.259642 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:33032.service - OpenSSH per-connection server daemon (10.0.0.1:33032). Jan 19 12:10:14.262035 systemd-logind[1598]: Removed session 11. Jan 19 12:10:14.405000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.407482 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 33032 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:14.407000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.407000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc8545ef0 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:14.407000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:14.411526 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:14.424962 systemd-logind[1598]: New session 12 of user core. Jan 19 12:10:14.446902 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 19 12:10:14.452000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.457000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.819325 sshd[5476]: Connection closed by 10.0.0.1 port 33032 Jan 19 12:10:14.819682 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:14.824000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.824000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.837793 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:33032.service: Deactivated successfully. 
Jan 19 12:10:14.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.55:22-10.0.0.1:33032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:14.846974 systemd[1]: session-12.scope: Deactivated successfully. Jan 19 12:10:14.856714 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Jan 19 12:10:14.866456 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:33044.service - OpenSSH per-connection server daemon (10.0.0.1:33044). Jan 19 12:10:14.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.55:22-10.0.0.1:33044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:14.873566 systemd-logind[1598]: Removed session 12. Jan 19 12:10:14.981000 audit[5487]: USER_ACCT pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.984655 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 33044 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:14.984000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:14.985000 audit[5487]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5d333fa0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:14.985000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:14.988743 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:15.003080 systemd-logind[1598]: New session 13 of user core. Jan 19 12:10:15.018788 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 19 12:10:15.024000 audit[5487]: USER_START pid=5487 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:15.028000 audit[5491]: CRED_ACQ pid=5491 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:15.228837 sshd[5491]: Connection closed by 10.0.0.1 port 33044 Jan 19 12:10:15.229520 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:15.232000 audit[5487]: USER_END pid=5487 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:15.233000 audit[5487]: CRED_DISP pid=5487 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:15.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.55:22-10.0.0.1:33044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:15.238581 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:33044.service: Deactivated successfully. Jan 19 12:10:15.243698 systemd[1]: session-13.scope: Deactivated successfully. Jan 19 12:10:15.247582 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. Jan 19 12:10:15.250776 systemd-logind[1598]: Removed session 13. 
Jan 19 12:10:15.437679 kubelet[2862]: E0119 12:10:15.437279 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:15.438306 containerd[1618]: time="2026-01-19T12:10:15.437858088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,}" Jan 19 12:10:15.822956 systemd-networkd[1519]: cali0185d9e1472: Link UP Jan 19 12:10:15.824585 systemd-networkd[1519]: cali0185d9e1472: Gained carrier Jan 19 12:10:15.861032 containerd[1618]: 2026-01-19 12:10:15.571 [INFO][5505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--5scwx-eth0 coredns-66bc5c9577- kube-system 81e1841f-312c-44d7-b340-4b8d02b8d37b 943 0 2026-01-19 12:08:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-5scwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0185d9e1472 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-" Jan 19 12:10:15.861032 containerd[1618]: 2026-01-19 12:10:15.572 [INFO][5505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.861032 containerd[1618]: 2026-01-19 12:10:15.696 [INFO][5519] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" HandleID="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Workload="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.696 [INFO][5519] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" HandleID="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Workload="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050b410), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-5scwx", "timestamp":"2026-01-19 12:10:15.696082613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.697 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.697 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.697 [INFO][5519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.711 [INFO][5519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" host="localhost" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.723 [INFO][5519] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.743 [INFO][5519] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.750 [INFO][5519] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.760 [INFO][5519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:15.861693 containerd[1618]: 2026-01-19 12:10:15.761 [INFO][5519] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" host="localhost" Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.767 [INFO][5519] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782 Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.794 [INFO][5519] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" host="localhost" Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.807 [INFO][5519] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" host="localhost" Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.807 [INFO][5519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" host="localhost" Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.807 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:10:15.862466 containerd[1618]: 2026-01-19 12:10:15.807 [INFO][5519] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" HandleID="k8s-pod-network.aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Workload="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.815 [INFO][5505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5scwx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81e1841f-312c-44d7-b340-4b8d02b8d37b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-5scwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0185d9e1472", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.816 [INFO][5505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.816 [INFO][5505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0185d9e1472 ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.823 
[INFO][5505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.824 [INFO][5505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5scwx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81e1841f-312c-44d7-b340-4b8d02b8d37b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782", Pod:"coredns-66bc5c9577-5scwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0185d9e1472", MAC:"c2:80:e3:03:74:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:15.862653 containerd[1618]: 2026-01-19 12:10:15.849 [INFO][5505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" Namespace="kube-system" Pod="coredns-66bc5c9577-5scwx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5scwx-eth0" Jan 19 12:10:15.904000 audit[5537]: NETFILTER_CFG table=filter:129 family=2 entries=58 op=nft_register_chain pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:15.904000 audit[5537]: SYSCALL arch=c000003e syscall=46 success=yes exit=27304 a0=3 a1=7fff76234ad0 a2=0 a3=7fff76234abc items=0 ppid=5057 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:15.904000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:15.948467 containerd[1618]: time="2026-01-19T12:10:15.947929986Z" level=info msg="connecting to shim aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782" address="unix:///run/containerd/s/9e14858b13fd77730210815e837f6e952d2342637bcfeb427f4a8fc367b7d8e8" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:16.040589 systemd[1]: Started cri-containerd-aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782.scope - libcontainer container aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782. Jan 19 12:10:16.073000 audit: BPF prog-id=231 op=LOAD Jan 19 12:10:16.076000 audit: BPF prog-id=232 op=LOAD Jan 19 12:10:16.076000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=232 op=UNLOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=233 op=LOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=234 op=LOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=234 op=UNLOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=233 op=UNLOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.077000 audit: BPF prog-id=235 op=LOAD Jan 19 12:10:16.077000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5546 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161663536616134343036613566633334663635303063653134353637 Jan 19 12:10:16.084952 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:16.219851 containerd[1618]: time="2026-01-19T12:10:16.219630295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5scwx,Uid:81e1841f-312c-44d7-b340-4b8d02b8d37b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782\"" Jan 19 12:10:16.225521 kubelet[2862]: E0119 12:10:16.224730 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:16.237895 containerd[1618]: time="2026-01-19T12:10:16.237507873Z" level=info msg="CreateContainer within sandbox \"aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 12:10:16.310583 containerd[1618]: time="2026-01-19T12:10:16.309641640Z" level=info msg="Container a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:10:16.309752 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4209672363.mount: Deactivated successfully. Jan 19 12:10:16.325835 containerd[1618]: time="2026-01-19T12:10:16.325798144Z" level=info msg="CreateContainer within sandbox \"aaf56aa4406a5fc34f6500ce1456741f0a7b3ea6deed80139cfc4f70afebb782\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef\"" Jan 19 12:10:16.330036 containerd[1618]: time="2026-01-19T12:10:16.329834022Z" level=info msg="StartContainer for \"a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef\"" Jan 19 12:10:16.332786 containerd[1618]: time="2026-01-19T12:10:16.332707298Z" level=info msg="connecting to shim a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef" address="unix:///run/containerd/s/9e14858b13fd77730210815e837f6e952d2342637bcfeb427f4a8fc367b7d8e8" protocol=ttrpc version=3 Jan 19 12:10:16.405781 systemd[1]: Started cri-containerd-a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef.scope - libcontainer container a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef. Jan 19 12:10:16.435567 containerd[1618]: time="2026-01-19T12:10:16.435482249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:10:16.474000 audit: BPF prog-id=236 op=LOAD Jan 19 12:10:16.476000 audit: BPF prog-id=237 op=LOAD Jan 19 12:10:16.476000 audit[5582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.476000 audit: BPF prog-id=237 op=UNLOAD Jan 19 12:10:16.476000 audit[5582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.481000 audit: BPF prog-id=238 op=LOAD Jan 19 12:10:16.481000 audit[5582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.481000 audit: BPF prog-id=239 op=LOAD Jan 19 
12:10:16.481000 audit[5582]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.481000 audit: BPF prog-id=239 op=UNLOAD Jan 19 12:10:16.481000 audit[5582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.481000 audit: BPF prog-id=238 op=UNLOAD Jan 19 12:10:16.481000 audit[5582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.481000 audit: BPF prog-id=240 op=LOAD Jan 19 12:10:16.481000 audit[5582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5546 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:16.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616438323365393666613366343563643565383864343866373430 Jan 19 12:10:16.596676 containerd[1618]: time="2026-01-19T12:10:16.595617736Z" level=info msg="StartContainer for \"a1ad823e96fa3f45cd5e88d48f740183d180371c9e4942a7ae6c7e25a2683bef\" returns successfully" Jan 19 12:10:16.876019 systemd-networkd[1519]: cali0185d9e1472: Gained IPv6LL Jan 19 12:10:16.916948 systemd-networkd[1519]: calia01efa88280: Link UP Jan 19 12:10:16.918848 systemd-networkd[1519]: calia01efa88280: Gained carrier Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.647 [INFO][5603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0 calico-apiserver-595df97b5c- calico-apiserver 57042a5e-1534-485c-abeb-75f1e57f8cf0 940 0 2026-01-19 12:08:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:595df97b5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-595df97b5c-7r9mw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia01efa88280 [] [] }} ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.647 [INFO][5603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.787 [INFO][5629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" HandleID="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Workload="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.787 [INFO][5629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" HandleID="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Workload="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-595df97b5c-7r9mw", "timestamp":"2026-01-19 12:10:16.787027823 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.788 [INFO][5629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.788 [INFO][5629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.788 [INFO][5629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.805 [INFO][5629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.827 [INFO][5629] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.848 [INFO][5629] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.855 [INFO][5629] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.861 [INFO][5629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.862 [INFO][5629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.868 [INFO][5629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100 Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.885 [INFO][5629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.899 [INFO][5629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.899 [INFO][5629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" host="localhost" Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.899 [INFO][5629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:10:16.963805 containerd[1618]: 2026-01-19 12:10:16.899 [INFO][5629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" HandleID="k8s-pod-network.e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Workload="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.906 [INFO][5603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0", GenerateName:"calico-apiserver-595df97b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"57042a5e-1534-485c-abeb-75f1e57f8cf0", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595df97b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-595df97b5c-7r9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia01efa88280", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.907 [INFO][5603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.907 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia01efa88280 ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.919 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.932 [INFO][5603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0", GenerateName:"calico-apiserver-595df97b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"57042a5e-1534-485c-abeb-75f1e57f8cf0", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"595df97b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100", Pod:"calico-apiserver-595df97b5c-7r9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia01efa88280", MAC:"3e:3e:77:39:ec:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:16.965561 containerd[1618]: 2026-01-19 12:10:16.954 [INFO][5603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" Namespace="calico-apiserver" Pod="calico-apiserver-595df97b5c-7r9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--595df97b5c--7r9mw-eth0" Jan 19 12:10:17.035000 audit[5650]: NETFILTER_CFG table=filter:130 family=2 entries=63 op=nft_register_chain pid=5650 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:17.035000 audit[5650]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffcd45ac250 a2=0 a3=7ffcd45ac23c items=0 ppid=5057 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.035000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:17.065496 containerd[1618]: time="2026-01-19T12:10:17.065080593Z" level=info msg="connecting to shim e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100" address="unix:///run/containerd/s/5967ab6d3d3d02bdec1c12ec03be2ee34d6f12d63b000900492f09e69fd410ba" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:17.141007 systemd[1]: Started cri-containerd-e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100.scope - libcontainer container e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100. 
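Note on the audit PROCTITLE records in this section: they carry the audited process's argv as a hex string with NUL separators between arguments. A small decoding sketch (hypothetical, not part of the log), applied to the iptables-nft-restore proctitle value recorded earlier in this section:

# Decode an audit PROCTITLE value: argv hex-encoded, arguments separated by NUL bytes.
proctitle_hex = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

The runc proctitle values decode the same way; their final --log path argument appears truncated by the audit subsystem's PROCTITLE length cap, so the rest of that path is not recoverable from the log.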
Jan 19 12:10:17.192000 audit: BPF prog-id=241 op=LOAD Jan 19 12:10:17.194000 audit: BPF prog-id=242 op=LOAD Jan 19 12:10:17.194000 audit[5669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.194000 audit: BPF prog-id=242 op=UNLOAD Jan 19 12:10:17.194000 audit[5669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.194000 audit: BPF prog-id=243 op=LOAD Jan 19 12:10:17.194000 audit[5669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.195000 audit: BPF prog-id=244 op=LOAD Jan 19 12:10:17.195000 audit[5669]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.195000 audit: BPF prog-id=244 op=UNLOAD Jan 19 12:10:17.195000 audit[5669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.195000 audit: BPF prog-id=243 op=UNLOAD Jan 19 12:10:17.195000 audit[5669]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.195000 audit: BPF prog-id=245 op=LOAD Jan 19 12:10:17.195000 audit[5669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=5659 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531323862653063386632623665656461386461323065386363346131 Jan 19 12:10:17.201811 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:17.300269 containerd[1618]: time="2026-01-19T12:10:17.299618042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-595df97b5c-7r9mw,Uid:57042a5e-1534-485c-abeb-75f1e57f8cf0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e128be0c8f2b6eeda8da20e8cc4a184c775009f9f313937f3bfa3634e6699100\"" Jan 19 12:10:17.306524 containerd[1618]: time="2026-01-19T12:10:17.305679583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:17.405550 containerd[1618]: time="2026-01-19T12:10:17.404967165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:17.409717 containerd[1618]: time="2026-01-19T12:10:17.409504795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:17.410550 containerd[1618]: time="2026-01-19T12:10:17.409647807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:17.410624 kubelet[2862]: E0119 12:10:17.410315 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:17.410624 kubelet[2862]: E0119 12:10:17.410449 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:17.411084 kubelet[2862]: E0119 12:10:17.410993 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:17.411084 kubelet[2862]: E0119 12:10:17.411026 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:17.432867 kubelet[2862]: E0119 12:10:17.432684 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:17.434643 containerd[1618]: time="2026-01-19T12:10:17.434571239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,}" Jan 19 12:10:17.613536 kubelet[2862]: E0119 12:10:17.612932 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:17.641470 kubelet[2862]: E0119 12:10:17.640758 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:17.649066 kubelet[2862]: I0119 12:10:17.648857 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5scwx" podStartSLOduration=105.648842042 podStartE2EDuration="1m45.648842042s" podCreationTimestamp="2026-01-19 12:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:10:17.647848586 +0000 UTC m=+110.898381339" watchObservedRunningTime="2026-01-19 12:10:17.648842042 +0000 UTC m=+110.899374804" Jan 19 12:10:17.779551 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 19 12:10:17.779708 kernel: audit: type=1325 audit(1768824617.746:760): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:17.746000 audit[5727]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:17.746000 audit[5727]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca68a9380 a2=0 a3=7ffca68a936c items=0 ppid=3029 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.818722 kernel: audit: type=1300 
audit(1768824617.746:760): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca68a9380 a2=0 a3=7ffca68a936c items=0 ppid=3029 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.818853 kernel: audit: type=1327 audit(1768824617.746:760): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:17.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:17.831542 kernel: audit: type=1325 audit(1768824617.820:761): table=nat:132 family=2 entries=14 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:17.820000 audit[5727]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:17.847567 kernel: audit: type=1300 audit(1768824617.820:761): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffca68a9380 a2=0 a3=0 items=0 ppid=3029 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.820000 audit[5727]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffca68a9380 a2=0 a3=0 items=0 ppid=3029 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:17.877226 kernel: audit: type=1327 audit(1768824617.820:761): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:17.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:17.895528 systemd-networkd[1519]: cali4becae7c900: Link UP Jan 19 12:10:17.962593 systemd-networkd[1519]: cali4becae7c900: Gained carrier Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.561 [INFO][5698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--qcffj-eth0 coredns-66bc5c9577- kube-system a49f9c26-4d74-485a-bf88-290c9f9a5235 936 0 2026-01-19 12:08:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-qcffj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4becae7c900 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.562 [INFO][5698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 
12:10:17.716 [INFO][5718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" HandleID="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Workload="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.719 [INFO][5718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" HandleID="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Workload="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-qcffj", "timestamp":"2026-01-19 12:10:17.716001159 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.719 [INFO][5718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.719 [INFO][5718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.719 [INFO][5718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.750 [INFO][5718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.782 [INFO][5718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.797 [INFO][5718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.803 [INFO][5718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.821 [INFO][5718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.821 [INFO][5718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.825 [INFO][5718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2 Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.840 [INFO][5718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.852 [INFO][5718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.853 [INFO][5718] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" host="localhost" Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.853 [INFO][5718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:10:18.004459 containerd[1618]: 2026-01-19 12:10:17.853 [INFO][5718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" HandleID="k8s-pod-network.935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Workload="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.879 [INFO][5698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qcffj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a49f9c26-4d74-485a-bf88-290c9f9a5235", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-qcffj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4becae7c900", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.880 [INFO][5698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.880 [INFO][5698] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to cali4becae7c900 ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.966 [INFO][5698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.969 [INFO][5698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qcffj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a49f9c26-4d74-485a-bf88-290c9f9a5235", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2", Pod:"coredns-66bc5c9577-qcffj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4becae7c900", MAC:"3a:fa:b8:a3:39:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:10:18.006518 containerd[1618]: 2026-01-19 12:10:17.988 [INFO][5698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" Namespace="kube-system" Pod="coredns-66bc5c9577-qcffj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qcffj-eth0" Jan 19 12:10:18.052000 audit[5738]: NETFILTER_CFG table=filter:133 family=2 entries=52 op=nft_register_chain pid=5738 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:18.052000 audit[5738]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffd347875d0 a2=0 a3=7ffd347875bc items=0 ppid=5057 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.095558 containerd[1618]: time="2026-01-19T12:10:18.095474747Z" level=info msg="connecting to shim 935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2" address="unix:///run/containerd/s/b9f5bf5bc991d9754c4fbacab45ad89efd00f2cbac1f6e0de45bdb31c6958459" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:10:18.117597 kernel: audit: type=1325 audit(1768824618.052:762): table=filter:133 family=2 entries=52 op=nft_register_chain pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:10:18.117691 kernel: audit: type=1300 audit(1768824618.052:762): arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffd347875d0 a2=0 a3=7ffd347875bc items=0 ppid=5057 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.052000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:18.139548 kernel: audit: type=1327 audit(1768824618.052:762): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:10:18.202695 systemd[1]: Started cri-containerd-935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2.scope - libcontainer container 935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2. 
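Annotation (not part of the captured log): the entries above show Calico IPAM assigning 192.168.88.136/26 to the coredns-66bc5c9577-qcffj sandbox, the veth cali4becae7c900 being wired up, and containerd starting the matching cri-containerd scope. A minimal sketch, assuming the `kubernetes` Python client and a kubeconfig that reaches this node's API server (both assumptions, neither appears in the log), of how one could confirm the pod actually reports that address:

```python
# Sketch: list the kube-dns pods and print the IP the API server recorded for each,
# to compare against the 192.168.88.136 address Calico IPAM logged above.
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig; use load_incluster_config() in-cluster
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod("kube-system", label_selector="k8s-app=kube-dns")
for pod in pods.items:
    print(pod.metadata.name, pod.status.pod_ip, pod.spec.node_name)
```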
Jan 19 12:10:18.237000 audit: BPF prog-id=246 op=LOAD Jan 19 12:10:18.239000 audit: BPF prog-id=247 op=LOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.247451 kernel: audit: type=1334 audit(1768824618.237:763): prog-id=246 op=LOAD Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.239000 audit: BPF prog-id=247 op=UNLOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.239000 audit: BPF prog-id=248 op=LOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.239000 audit: BPF prog-id=249 op=LOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.239000 audit: BPF prog-id=249 op=UNLOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 
19 12:10:18.239000 audit: BPF prog-id=248 op=UNLOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.239000 audit: BPF prog-id=250 op=LOAD Jan 19 12:10:18.239000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5747 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933353832366332363430396335353165383234366231643766326630 Jan 19 12:10:18.248660 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:10:18.336926 containerd[1618]: time="2026-01-19T12:10:18.336638741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qcffj,Uid:a49f9c26-4d74-485a-bf88-290c9f9a5235,Namespace:kube-system,Attempt:0,} returns sandbox id \"935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2\"" Jan 19 12:10:18.340718 kubelet[2862]: E0119 12:10:18.339777 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:18.349521 systemd-networkd[1519]: calia01efa88280: Gained IPv6LL Jan 19 12:10:18.357299 containerd[1618]: time="2026-01-19T12:10:18.357273713Z" level=info msg="CreateContainer within sandbox \"935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 12:10:18.375991 containerd[1618]: time="2026-01-19T12:10:18.375655973Z" level=info msg="Container 56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:10:18.394061 containerd[1618]: time="2026-01-19T12:10:18.393592270Z" level=info msg="CreateContainer within sandbox \"935826c26409c551e8246b1d7f2f0551cb954486a2f0e2f0b6b4499a880f65b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc\"" Jan 19 12:10:18.395278 containerd[1618]: time="2026-01-19T12:10:18.394966227Z" level=info msg="StartContainer for \"56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc\"" Jan 19 12:10:18.396141 containerd[1618]: time="2026-01-19T12:10:18.395896625Z" level=info msg="connecting to shim 56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc" address="unix:///run/containerd/s/b9f5bf5bc991d9754c4fbacab45ad89efd00f2cbac1f6e0de45bdb31c6958459" protocol=ttrpc version=3 Jan 19 12:10:18.453575 systemd[1]: Started 
cri-containerd-56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc.scope - libcontainer container 56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc. Jan 19 12:10:18.480000 audit: BPF prog-id=251 op=LOAD Jan 19 12:10:18.481000 audit: BPF prog-id=252 op=LOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.481000 audit: BPF prog-id=252 op=UNLOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.481000 audit: BPF prog-id=253 op=LOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.481000 audit: BPF prog-id=254 op=LOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.481000 audit: BPF prog-id=254 op=UNLOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.481000 audit: BPF prog-id=253 op=UNLOAD Jan 19 12:10:18.481000 audit[5787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.482000 audit: BPF prog-id=255 op=LOAD Jan 19 12:10:18.482000 audit[5787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5747 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643934663035383934666266646639396564626562313461363336 Jan 19 12:10:18.544471 containerd[1618]: time="2026-01-19T12:10:18.542507686Z" level=info msg="StartContainer for \"56d94f05894fbfdf99edbeb14a6367b07783e433a0a04587404b82205e4e59cc\" returns successfully" Jan 19 12:10:18.652689 kubelet[2862]: E0119 12:10:18.651474 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:18.652689 kubelet[2862]: E0119 12:10:18.652254 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:18.655689 kubelet[2862]: E0119 12:10:18.654302 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:18.688868 kubelet[2862]: I0119 12:10:18.688464 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qcffj" podStartSLOduration=106.688450621 podStartE2EDuration="1m46.688450621s" podCreationTimestamp="2026-01-19 12:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:10:18.68660347 +0000 UTC m=+111.937136213" watchObservedRunningTime="2026-01-19 12:10:18.688450621 +0000 UTC m=+111.938983373" Jan 19 12:10:18.944000 audit[5821]: NETFILTER_CFG table=filter:134 family=2 entries=17 
op=nft_register_rule pid=5821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:18.944000 audit[5821]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce2cd76f0 a2=0 a3=7ffce2cd76dc items=0 ppid=3029 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:18.967000 audit[5821]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:18.967000 audit[5821]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffce2cd76f0 a2=0 a3=7ffce2cd76dc items=0 ppid=3029 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:18.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:19.243001 systemd-networkd[1519]: cali4becae7c900: Gained IPv6LL Jan 19 12:10:19.658610 kubelet[2862]: E0119 12:10:19.657308 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:19.658610 kubelet[2862]: E0119 12:10:19.657623 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:20.004000 audit[5824]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:20.004000 audit[5824]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcacb285a0 a2=0 a3=7ffcacb2858c items=0 ppid=3029 pid=5824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:20.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:20.035000 audit[5824]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=5824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:20.035000 audit[5824]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcacb285a0 a2=0 a3=7ffcacb2858c items=0 ppid=3029 pid=5824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:20.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:20.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.55:22-10.0.0.1:33050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:20.245702 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:33050.service - OpenSSH per-connection server daemon (10.0.0.1:33050). 
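Annotation (not part of the captured log): the repeated kubelet dns.go:154 warnings in this section fire because the host resolv.conf offers more nameservers than kubelet will propagate to pods (the limit is three), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept. A small sketch, assuming read access to /etc/resolv.conf on the node (the path is an assumption, not taken from the log), that counts what the host is offering:

```python
# Count nameserver lines in the node's resolv.conf; anything past the first three
# is what kubelet's "Nameserver limits exceeded" warning says it omitted.
with open("/etc/resolv.conf") as f:
    nameservers = [line.split()[1]
                   for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

print(f"{len(nameservers)} nameserver(s): {', '.join(nameservers)}")
if len(nameservers) > 3:
    print("kubelet will pass only the first three to pods:", nameservers[:3])
```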
Jan 19 12:10:20.494000 audit[5827]: USER_ACCT pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.496763 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 33050 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:20.498000 audit[5827]: CRED_ACQ pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.498000 audit[5827]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14c419f0 a2=3 a3=0 items=0 ppid=1 pid=5827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:20.498000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:20.501532 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:20.514885 systemd-logind[1598]: New session 14 of user core. Jan 19 12:10:20.521700 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 19 12:10:20.527000 audit[5827]: USER_START pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.531000 audit[5831]: CRED_ACQ pid=5831 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.664480 kubelet[2862]: E0119 12:10:20.663757 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:20.743997 sshd[5831]: Connection closed by 10.0.0.1 port 33050 Jan 19 12:10:20.744823 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:20.752000 audit[5827]: USER_END pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.753000 audit[5827]: CRED_DISP pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:20.758721 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:33050.service: Deactivated successfully. Jan 19 12:10:20.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.55:22-10.0.0.1:33050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:10:20.762958 systemd[1]: session-14.scope: Deactivated successfully. Jan 19 12:10:20.765478 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. Jan 19 12:10:20.768842 systemd-logind[1598]: Removed session 14. Jan 19 12:10:21.439898 containerd[1618]: time="2026-01-19T12:10:21.438876075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:10:21.548290 containerd[1618]: time="2026-01-19T12:10:21.548004530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:21.551608 containerd[1618]: time="2026-01-19T12:10:21.551573392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:10:21.552057 containerd[1618]: time="2026-01-19T12:10:21.551815955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:21.552530 kubelet[2862]: E0119 12:10:21.552321 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:21.552530 kubelet[2862]: E0119 12:10:21.552480 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:21.552636 kubelet[2862]: E0119 12:10:21.552552 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:21.556287 containerd[1618]: time="2026-01-19T12:10:21.555988952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:10:21.650543 containerd[1618]: time="2026-01-19T12:10:21.649961148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:21.653441 containerd[1618]: time="2026-01-19T12:10:21.652941875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:10:21.653441 containerd[1618]: time="2026-01-19T12:10:21.653081290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:21.653754 kubelet[2862]: E0119 12:10:21.653600 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:21.653754 kubelet[2862]: E0119 
12:10:21.653643 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:21.653754 kubelet[2862]: E0119 12:10:21.653707 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:21.653754 kubelet[2862]: E0119 12:10:21.653743 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:21.668516 kubelet[2862]: E0119 12:10:21.668268 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:23.430353 containerd[1618]: time="2026-01-19T12:10:23.430065452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:10:23.504985 containerd[1618]: time="2026-01-19T12:10:23.504693739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:23.507815 containerd[1618]: time="2026-01-19T12:10:23.507761689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:10:23.507868 containerd[1618]: time="2026-01-19T12:10:23.507832391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:23.508791 kubelet[2862]: E0119 12:10:23.508303 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:23.508791 kubelet[2862]: E0119 12:10:23.508345 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:23.508791 kubelet[2862]: E0119 12:10:23.508661 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:23.512078 containerd[1618]: time="2026-01-19T12:10:23.511852769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:10:23.579876 containerd[1618]: time="2026-01-19T12:10:23.579632825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:23.582792 containerd[1618]: time="2026-01-19T12:10:23.582518512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:10:23.582792 containerd[1618]: time="2026-01-19T12:10:23.582609862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:23.583232 kubelet[2862]: E0119 12:10:23.582930 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:23.583232 kubelet[2862]: E0119 12:10:23.582982 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:23.583232 kubelet[2862]: E0119 12:10:23.583064 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:23.583358 kubelet[2862]: E0119 12:10:23.583301 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:24.574960 update_engine[1599]: I20260119 12:10:24.574808 1599 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 19 12:10:24.574960 update_engine[1599]: I20260119 12:10:24.574950 1599 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 19 12:10:24.578877 update_engine[1599]: I20260119 
12:10:24.578689 1599 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 19 12:10:24.579934 update_engine[1599]: I20260119 12:10:24.579900 1599 omaha_request_params.cc:62] Current group set to developer Jan 19 12:10:24.580762 update_engine[1599]: I20260119 12:10:24.580627 1599 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 19 12:10:24.580762 update_engine[1599]: I20260119 12:10:24.580642 1599 update_attempter.cc:643] Scheduling an action processor start. Jan 19 12:10:24.580762 update_engine[1599]: I20260119 12:10:24.580663 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 19 12:10:24.580762 update_engine[1599]: I20260119 12:10:24.580709 1599 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 19 12:10:24.580895 update_engine[1599]: I20260119 12:10:24.580767 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 19 12:10:24.580895 update_engine[1599]: I20260119 12:10:24.580776 1599 omaha_request_action.cc:272] Request: Jan 19 12:10:24.580895 update_engine[1599]: [Omaha request XML body was stripped from the captured log] Jan 19 12:10:24.580895 update_engine[1599]: I20260119 12:10:24.580785 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 19 12:10:24.589595 update_engine[1599]: I20260119 12:10:24.589354 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 19 12:10:24.592046 update_engine[1599]: I20260119 12:10:24.592000 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
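Annotation (not part of the captured log): the fetch attempt started above fails in the entries that follow with "Could not resolve host: disabled". That is a direct consequence of the "Posting an Omaha request to disabled" line: the update server for this image is configured as the literal string `disabled`, so libcurl treats it as a hostname and the DNS lookup fails. A tiny sketch reproducing the same failure mode with the standard library:

```python
# Sketch: resolving the literal string "disabled" fails the same way libcurl's
# lookup does in the update_engine entries below.
import socket

try:
    socket.getaddrinfo("disabled", 443)
except socket.gaierror as exc:
    print("resolution failed:", exc)   # analogous to "Could not resolve host: disabled"
```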
Jan 19 12:10:24.620017 locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 19 12:10:24.625868 update_engine[1599]: E20260119 12:10:24.625718 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 19 12:10:24.625998 update_engine[1599]: I20260119 12:10:24.625917 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 19 12:10:25.433774 containerd[1618]: time="2026-01-19T12:10:25.432701965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:10:25.517077 containerd[1618]: time="2026-01-19T12:10:25.516931772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:25.520627 containerd[1618]: time="2026-01-19T12:10:25.520489657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:10:25.520693 containerd[1618]: time="2026-01-19T12:10:25.520541502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:25.521773 kubelet[2862]: E0119 12:10:25.521054 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:25.521773 kubelet[2862]: E0119 12:10:25.521295 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:25.521773 kubelet[2862]: E0119 12:10:25.521616 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:25.521773 kubelet[2862]: E0119 12:10:25.521656 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:25.524365 containerd[1618]: time="2026-01-19T12:10:25.521694778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:10:25.593524 containerd[1618]: time="2026-01-19T12:10:25.592946705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:25.598302 containerd[1618]: time="2026-01-19T12:10:25.597330825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:10:25.598302 containerd[1618]: time="2026-01-19T12:10:25.597771859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:25.598711 kubelet[2862]: E0119 12:10:25.598661 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:10:25.598711 kubelet[2862]: E0119 12:10:25.598698 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:10:25.599052 kubelet[2862]: E0119 12:10:25.598850 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:25.599052 kubelet[2862]: E0119 12:10:25.598881 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:25.600759 containerd[1618]: time="2026-01-19T12:10:25.600327432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:25.712483 containerd[1618]: time="2026-01-19T12:10:25.711799312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:25.719500 containerd[1618]: time="2026-01-19T12:10:25.719264013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:25.719500 containerd[1618]: time="2026-01-19T12:10:25.719327100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:25.720593 kubelet[2862]: E0119 12:10:25.720023 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:25.720593 kubelet[2862]: E0119 12:10:25.720353 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:25.720593 kubelet[2862]: E0119 12:10:25.720502 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:25.720593 kubelet[2862]: E0119 12:10:25.720535 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:25.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.55:22-10.0.0.1:39840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:25.780667 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:39840.service - OpenSSH per-connection server daemon (10.0.0.1:39840). Jan 19 12:10:25.807334 kernel: kauditd_printk_skb: 66 callbacks suppressed Jan 19 12:10:25.807557 kernel: audit: type=1130 audit(1768824625.779:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.55:22-10.0.0.1:39840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:25.983000 audit[5858]: USER_ACCT pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:25.986024 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:25.991522 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:26.008525 systemd-logind[1598]: New session 15 of user core. 
Jan 19 12:10:26.021914 kernel: audit: type=1101 audit(1768824625.983:793): pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.022001 kernel: audit: type=1103 audit(1768824625.986:794): pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:25.986000 audit[5858]: CRED_ACQ pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.073461 kernel: audit: type=1006 audit(1768824625.986:795): pid=5858 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 19 12:10:26.073537 kernel: audit: type=1300 audit(1768824625.986:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7621dee0 a2=3 a3=0 items=0 ppid=1 pid=5858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:25.986000 audit[5858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7621dee0 a2=3 a3=0 items=0 ppid=1 pid=5858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:26.111082 kernel: audit: type=1327 audit(1768824625.986:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:25.986000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:26.111860 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 19 12:10:26.127964 kernel: audit: type=1105 audit(1768824626.119:796): pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.119000 audit[5858]: USER_START pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.207299 kernel: audit: type=1103 audit(1768824626.123:797): pid=5862 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.123000 audit[5862]: CRED_ACQ pid=5862 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.458917 sshd[5862]: Connection closed by 10.0.0.1 port 39840 Jan 19 12:10:26.460695 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:26.469000 audit[5858]: USER_END pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.478697 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:39840.service: Deactivated successfully. Jan 19 12:10:26.492580 systemd[1]: session-15.scope: Deactivated successfully. Jan 19 12:10:26.496912 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. Jan 19 12:10:26.502014 systemd-logind[1598]: Removed session 15. Jan 19 12:10:26.517587 kernel: audit: type=1106 audit(1768824626.469:798): pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.517651 kernel: audit: type=1104 audit(1768824626.469:799): pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.469000 audit[5858]: CRED_DISP pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:26.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.55:22-10.0.0.1:39840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:10:31.433650 containerd[1618]: time="2026-01-19T12:10:31.432702128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:31.488880 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:39842.service - OpenSSH per-connection server daemon (10.0.0.1:39842). Jan 19 12:10:31.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.55:22-10.0.0.1:39842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:31.497330 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:10:31.497495 kernel: audit: type=1130 audit(1768824631.487:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.55:22-10.0.0.1:39842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:31.524735 containerd[1618]: time="2026-01-19T12:10:31.524502920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:31.527901 containerd[1618]: time="2026-01-19T12:10:31.527665296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:31.527901 containerd[1618]: time="2026-01-19T12:10:31.527882091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:31.529015 kubelet[2862]: E0119 12:10:31.528644 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:31.529015 kubelet[2862]: E0119 12:10:31.528799 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:31.529847 kubelet[2862]: E0119 12:10:31.529291 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:31.529847 kubelet[2862]: E0119 12:10:31.529336 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:31.631054 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 39842 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:31.629000 audit[5882]: USER_ACCT pid=5882 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.637714 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:31.652684 systemd-logind[1598]: New session 16 of user core. Jan 19 12:10:31.633000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.698881 kernel: audit: type=1101 audit(1768824631.629:802): pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.698932 kernel: audit: type=1103 audit(1768824631.633:803): pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.719030 kernel: audit: type=1006 audit(1768824631.633:804): pid=5882 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 19 12:10:31.719613 kernel: audit: type=1300 audit(1768824631.633:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda468af60 a2=3 a3=0 items=0 ppid=1 pid=5882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:31.633000 audit[5882]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda468af60 a2=3 a3=0 items=0 ppid=1 pid=5882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:31.633000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:31.768681 kernel: audit: type=1327 audit(1768824631.633:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:31.770957 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 19 12:10:31.779000 audit[5882]: USER_START pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.782000 audit[5886]: CRED_ACQ pid=5886 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.851730 kernel: audit: type=1105 audit(1768824631.779:805): pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:31.851845 kernel: audit: type=1103 audit(1768824631.782:806): pid=5886 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:32.114859 sshd[5886]: Connection closed by 10.0.0.1 port 39842 Jan 19 12:10:32.110085 sshd-session[5882]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:32.145000 audit[5882]: USER_END pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:32.151706 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:39842.service: Deactivated successfully. Jan 19 12:10:32.156080 systemd[1]: session-16.scope: Deactivated successfully. Jan 19 12:10:32.160986 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. Jan 19 12:10:32.164793 systemd-logind[1598]: Removed session 16. Jan 19 12:10:32.145000 audit[5882]: CRED_DISP pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:32.195499 kernel: audit: type=1106 audit(1768824632.145:807): pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:32.195568 kernel: audit: type=1104 audit(1768824632.145:808): pid=5882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:32.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.55:22-10.0.0.1:39842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:10:32.427830 kubelet[2862]: E0119 12:10:32.427635 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:33.183785 kubelet[2862]: E0119 12:10:33.183646 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:34.424752 kubelet[2862]: E0119 12:10:34.424663 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:34.428926 kubelet[2862]: E0119 12:10:34.428531 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:34.512384 update_engine[1599]: I20260119 12:10:34.511814 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 19 12:10:34.512384 update_engine[1599]: I20260119 12:10:34.512042 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 19 12:10:34.514831 update_engine[1599]: I20260119 12:10:34.514686 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 19 12:10:34.549808 update_engine[1599]: E20260119 12:10:34.549556 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 19 12:10:34.549808 update_engine[1599]: I20260119 12:10:34.549760 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 19 12:10:36.429052 kubelet[2862]: E0119 12:10:36.428814 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:37.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.55:22-10.0.0.1:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:37.127994 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:33834.service - OpenSSH per-connection server daemon (10.0.0.1:33834). Jan 19 12:10:37.136957 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:10:37.137021 kernel: audit: type=1130 audit(1768824637.127:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.55:22-10.0.0.1:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:37.296000 audit[5937]: USER_ACCT pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.299922 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 33834 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:37.301389 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:37.327676 systemd-logind[1598]: New session 17 of user core. 
Jan 19 12:10:37.298000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.377983 kernel: audit: type=1101 audit(1768824637.296:811): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.378319 kernel: audit: type=1103 audit(1768824637.298:812): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.378379 kernel: audit: type=1006 audit(1768824637.298:813): pid=5937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 19 12:10:37.298000 audit[5937]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0d496960 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:37.456597 kernel: audit: type=1300 audit(1768824637.298:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0d496960 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:37.456713 kernel: audit: type=1327 audit(1768824637.298:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:37.298000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:37.478727 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 19 12:10:37.488000 audit[5937]: USER_START pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.493000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.543374 kernel: audit: type=1105 audit(1768824637.488:814): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.543540 kernel: audit: type=1103 audit(1768824637.493:815): pid=5941 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.805534 sshd[5941]: Connection closed by 10.0.0.1 port 33834 Jan 19 12:10:37.804882 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:37.812000 audit[5937]: USER_END pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.812000 audit[5937]: CRED_DISP pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.907904 kernel: audit: type=1106 audit(1768824637.812:816): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.907990 kernel: audit: type=1104 audit(1768824637.812:817): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:37.915084 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:33834.service: Deactivated successfully. Jan 19 12:10:37.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.55:22-10.0.0.1:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:37.918978 systemd[1]: session-17.scope: Deactivated successfully. Jan 19 12:10:37.924007 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Jan 19 12:10:37.929610 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840). 
Jan 19 12:10:37.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.55:22-10.0.0.1:33840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:37.934528 systemd-logind[1598]: Removed session 17. Jan 19 12:10:38.069000 audit[5954]: USER_ACCT pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.072624 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:38.072000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.073000 audit[5954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff420ca310 a2=3 a3=0 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:38.073000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:38.076056 sshd-session[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:38.094687 systemd-logind[1598]: New session 18 of user core. Jan 19 12:10:38.112560 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 19 12:10:38.119000 audit[5954]: USER_START pid=5954 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.124000 audit[5958]: CRED_ACQ pid=5958 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.427082 kubelet[2862]: E0119 12:10:38.426709 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:38.726033 sshd[5958]: Connection closed by 10.0.0.1 port 33840 Jan 19 12:10:38.726026 sshd-session[5954]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:38.738695 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:33856.service - OpenSSH per-connection server daemon (10.0.0.1:33856). 
Jan 19 12:10:38.735000 audit[5954]: USER_END pid=5954 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.735000 audit[5954]: CRED_DISP pid=5954 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.55:22-10.0.0.1:33856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:38.741948 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit. Jan 19 12:10:38.743053 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:33840.service: Deactivated successfully. Jan 19 12:10:38.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.55:22-10.0.0.1:33840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:38.749988 systemd[1]: session-18.scope: Deactivated successfully. Jan 19 12:10:38.755829 systemd-logind[1598]: Removed session 18. Jan 19 12:10:38.946000 audit[5969]: USER_ACCT pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.948249 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 33856 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:38.949000 audit[5969]: CRED_ACQ pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.949000 audit[5969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5d4bce90 a2=3 a3=0 items=0 ppid=1 pid=5969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:38.949000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:38.952909 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:38.970393 systemd-logind[1598]: New session 19 of user core. Jan 19 12:10:38.976687 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 19 12:10:38.987000 audit[5969]: USER_START pid=5969 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:38.992000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:39.441628 kubelet[2862]: E0119 12:10:39.440624 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:40.351346 sshd[5976]: Connection closed by 10.0.0.1 port 33856 Jan 19 12:10:40.355006 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:40.358000 audit[5969]: USER_END pid=5969 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:40.359000 audit[5969]: CRED_DISP pid=5969 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:40.373681 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:33856.service: Deactivated successfully. Jan 19 12:10:40.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.55:22-10.0.0.1:33856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:40.380864 systemd[1]: session-19.scope: Deactivated successfully. Jan 19 12:10:40.382896 systemd[1]: session-19.scope: Consumed 1.249s CPU time, 41.7M memory peak. Jan 19 12:10:40.389586 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit. Jan 19 12:10:40.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.55:22-10.0.0.1:33872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:40.397393 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:33872.service - OpenSSH per-connection server daemon (10.0.0.1:33872). Jan 19 12:10:40.403902 systemd-logind[1598]: Removed session 19. 
Jan 19 12:10:40.417000 audit[5992]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:40.417000 audit[5992]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd8ff9640 a2=0 a3=7fffd8ff962c items=0 ppid=3029 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:40.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:40.430000 audit[5992]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:40.430000 audit[5992]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd8ff9640 a2=0 a3=0 items=0 ppid=3029 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:40.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:40.567000 audit[5997]: USER_ACCT pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:40.569823 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 33872 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:40.571000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:40.571000 audit[5997]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeca0c5c30 a2=3 a3=0 items=0 ppid=1 pid=5997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:40.571000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:40.574773 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:40.594823 systemd-logind[1598]: New session 20 of user core. Jan 19 12:10:40.610769 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 19 12:10:40.618000 audit[5997]: USER_START pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:40.624000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.284834 sshd[6001]: Connection closed by 10.0.0.1 port 33872 Jan 19 12:10:41.286540 sshd-session[5997]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:41.287000 audit[5997]: USER_END pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.288000 audit[5997]: CRED_DISP pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.303688 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:33872.service: Deactivated successfully. Jan 19 12:10:41.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.55:22-10.0.0.1:33872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:41.306840 systemd[1]: session-20.scope: Deactivated successfully. Jan 19 12:10:41.311701 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit. Jan 19 12:10:41.323542 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876). Jan 19 12:10:41.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.55:22-10.0.0.1:33876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:41.329854 systemd-logind[1598]: Removed session 20. 
Jan 19 12:10:41.513000 audit[6013]: USER_ACCT pid=6013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.515022 sshd[6013]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:41.516000 audit[6013]: CRED_ACQ pid=6013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.516000 audit[6013]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8958c5c0 a2=3 a3=0 items=0 ppid=1 pid=6013 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:41.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:41.520608 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:41.534848 systemd-logind[1598]: New session 21 of user core. Jan 19 12:10:41.539000 audit[6017]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=6017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:41.539000 audit[6017]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe55776e60 a2=0 a3=7ffe55776e4c items=0 ppid=3029 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:41.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:41.541729 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 19 12:10:41.545000 audit[6017]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=6017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:41.545000 audit[6017]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe55776e60 a2=0 a3=0 items=0 ppid=3029 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:41.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:41.549000 audit[6013]: USER_START pid=6013 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.555000 audit[6019]: CRED_ACQ pid=6019 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.912393 sshd[6019]: Connection closed by 10.0.0.1 port 33876 Jan 19 12:10:41.914861 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:41.923000 audit[6013]: USER_END pid=6013 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.923000 audit[6013]: CRED_DISP pid=6013 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:41.931622 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:33876.service: Deactivated successfully. Jan 19 12:10:41.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.55:22-10.0.0.1:33876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:41.936838 systemd[1]: session-21.scope: Deactivated successfully. Jan 19 12:10:41.944046 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit. Jan 19 12:10:41.948002 systemd-logind[1598]: Removed session 21. Jan 19 12:10:44.511615 update_engine[1599]: I20260119 12:10:44.510541 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 19 12:10:44.511615 update_engine[1599]: I20260119 12:10:44.510653 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 19 12:10:44.512706 update_engine[1599]: I20260119 12:10:44.511858 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 19 12:10:44.533612 update_engine[1599]: E20260119 12:10:44.533012 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 19 12:10:44.533612 update_engine[1599]: I20260119 12:10:44.533593 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 19 12:10:45.477537 kubelet[2862]: E0119 12:10:45.475890 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:10:45.480320 containerd[1618]: time="2026-01-19T12:10:45.478794949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:10:45.573828 containerd[1618]: time="2026-01-19T12:10:45.573401222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:45.578712 containerd[1618]: time="2026-01-19T12:10:45.577789737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:10:45.578712 containerd[1618]: time="2026-01-19T12:10:45.578025728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:45.581773 kubelet[2862]: E0119 12:10:45.581724 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:45.581773 kubelet[2862]: E0119 12:10:45.581769 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:10:45.582601 kubelet[2862]: E0119 12:10:45.581979 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:45.584335 containerd[1618]: time="2026-01-19T12:10:45.583812304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:10:45.677807 containerd[1618]: time="2026-01-19T12:10:45.677543809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:45.684759 containerd[1618]: time="2026-01-19T12:10:45.684337621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:10:45.684759 containerd[1618]: 
time="2026-01-19T12:10:45.684566509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:45.684892 kubelet[2862]: E0119 12:10:45.684752 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:45.684892 kubelet[2862]: E0119 12:10:45.684805 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:10:45.685696 kubelet[2862]: E0119 12:10:45.684993 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:45.686681 containerd[1618]: time="2026-01-19T12:10:45.686659667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:10:45.767879 containerd[1618]: time="2026-01-19T12:10:45.767744150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:45.781589 containerd[1618]: time="2026-01-19T12:10:45.778703519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:10:45.781589 containerd[1618]: time="2026-01-19T12:10:45.778818805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:45.781755 kubelet[2862]: E0119 12:10:45.780713 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:45.781755 kubelet[2862]: E0119 12:10:45.780770 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:10:45.781755 kubelet[2862]: E0119 12:10:45.780962 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sh4c8_calico-system(fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:45.781755 kubelet[2862]: E0119 
12:10:45.781015 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:45.786852 containerd[1618]: time="2026-01-19T12:10:45.783672224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:10:45.875985 containerd[1618]: time="2026-01-19T12:10:45.875673274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:45.881795 containerd[1618]: time="2026-01-19T12:10:45.881534487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:10:45.881795 containerd[1618]: time="2026-01-19T12:10:45.881740482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:45.884913 kubelet[2862]: E0119 12:10:45.883980 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:45.884913 kubelet[2862]: E0119 12:10:45.884032 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:10:45.884913 kubelet[2862]: E0119 12:10:45.884335 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c46bc687f-z2lmj_calico-system(acf44a01-9bd4-43fa-8dda-bec90148f2fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:45.884913 kubelet[2862]: E0119 12:10:45.884385 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" 
podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:46.427573 kubelet[2862]: E0119 12:10:46.426977 2862 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:10:46.951961 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Jan 19 12:10:46.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.55:22-10.0.0.1:49072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:47.000764 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 19 12:10:47.000895 kernel: audit: type=1130 audit(1768824646.949:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.55:22-10.0.0.1:49072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:47.153000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.155847 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:47.161874 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:47.177857 systemd-logind[1598]: New session 22 of user core. Jan 19 12:10:47.157000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.247852 kernel: audit: type=1101 audit(1768824647.153:860): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.248000 kernel: audit: type=1103 audit(1768824647.157:861): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.248027 kernel: audit: type=1006 audit(1768824647.157:862): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 19 12:10:47.277572 kernel: audit: type=1300 audit(1768824647.157:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2648bda0 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:47.157000 audit[6032]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2648bda0 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:47.157000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:47.344776 kernel: audit: type=1327 audit(1768824647.157:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:47.347044 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 19 12:10:47.356000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.413341 kernel: audit: type=1105 audit(1768824647.356:863): pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.413592 kernel: audit: type=1103 audit(1768824647.357:864): pid=6036 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.357000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.665888 sshd[6036]: Connection closed by 10.0.0.1 port 49072 Jan 19 12:10:47.667673 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:47.671000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.679723 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:49072.service: Deactivated successfully. Jan 19 12:10:47.687653 systemd[1]: session-22.scope: Deactivated successfully. Jan 19 12:10:47.690762 systemd-logind[1598]: Session 22 logged out. Waiting for processes to exit. Jan 19 12:10:47.696690 systemd-logind[1598]: Removed session 22. 
Jan 19 12:10:47.673000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.763906 kernel: audit: type=1106 audit(1768824647.671:865): pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.764033 kernel: audit: type=1104 audit(1768824647.673:866): pid=6032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:47.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.55:22-10.0.0.1:49072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:48.429346 containerd[1618]: time="2026-01-19T12:10:48.429060972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:10:48.504372 containerd[1618]: time="2026-01-19T12:10:48.503845422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:48.507548 containerd[1618]: time="2026-01-19T12:10:48.506634509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:10:48.507548 containerd[1618]: time="2026-01-19T12:10:48.506818573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:48.507637 kubelet[2862]: E0119 12:10:48.507025 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:48.507637 kubelet[2862]: E0119 12:10:48.507066 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:10:48.507637 kubelet[2862]: E0119 12:10:48.507370 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9qp6h_calico-system(c862d411-4d4f-4a97-b967-49e1eb15851d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:48.507637 kubelet[2862]: E0119 12:10:48.507401 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d" Jan 19 12:10:50.242000 audit[6052]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:50.242000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd96e38d40 a2=0 a3=7ffd96e38d2c items=0 ppid=3029 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:50.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:50.263000 audit[6052]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:10:50.263000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd96e38d40 a2=0 a3=7ffd96e38d2c items=0 ppid=3029 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:50.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:10:50.437786 containerd[1618]: time="2026-01-19T12:10:50.437038452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:10:50.515801 containerd[1618]: time="2026-01-19T12:10:50.514670614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:50.519082 containerd[1618]: time="2026-01-19T12:10:50.518937777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:10:50.519082 containerd[1618]: time="2026-01-19T12:10:50.519039117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:50.521041 kubelet[2862]: E0119 12:10:50.520829 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:10:50.521041 kubelet[2862]: E0119 12:10:50.520879 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:10:50.521041 kubelet[2862]: E0119 12:10:50.520974 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-975cd56bc-wkczn_calico-system(75367eb7-d5a3-4610-be00-cbd5e7d7db9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:50.521844 kubelet[2862]: E0119 12:10:50.521013 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-975cd56bc-wkczn" podUID="75367eb7-d5a3-4610-be00-cbd5e7d7db9d" Jan 19 12:10:52.429959 containerd[1618]: time="2026-01-19T12:10:52.429676355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:52.510001 containerd[1618]: time="2026-01-19T12:10:52.509963514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:52.512862 containerd[1618]: time="2026-01-19T12:10:52.512827312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:52.513586 containerd[1618]: time="2026-01-19T12:10:52.513340538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:52.517345 kubelet[2862]: E0119 12:10:52.515538 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:52.517345 kubelet[2862]: E0119 12:10:52.515592 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:52.517345 kubelet[2862]: E0119 12:10:52.515923 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-595df97b5c-6lb2q_calico-apiserver(6b9860e0-5e66-444d-bc4f-e74b59e19721): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:52.517345 kubelet[2862]: E0119 12:10:52.515959 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-595df97b5c-6lb2q" podUID="6b9860e0-5e66-444d-bc4f-e74b59e19721" Jan 19 12:10:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.55:22-10.0.0.1:39230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:52.688060 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:39230.service - OpenSSH per-connection server daemon (10.0.0.1:39230). 
Jan 19 12:10:52.719037 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 19 12:10:52.719393 kernel: audit: type=1130 audit(1768824652.687:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.55:22-10.0.0.1:39230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:52.871000 audit[6060]: USER_ACCT pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.872945 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 39230 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:52.877668 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:52.895031 systemd-logind[1598]: New session 23 of user core. Jan 19 12:10:52.874000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.963630 kernel: audit: type=1101 audit(1768824652.871:871): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.963952 kernel: audit: type=1103 audit(1768824652.874:872): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.966540 kernel: audit: type=1006 audit(1768824652.874:873): pid=6060 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 19 12:10:52.967653 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 19 12:10:52.874000 audit[6060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb879e690 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:52.874000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:53.070393 kernel: audit: type=1300 audit(1768824652.874:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb879e690 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:53.070781 kernel: audit: type=1327 audit(1768824652.874:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:53.070814 kernel: audit: type=1105 audit(1768824652.979:874): pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.979000 audit[6060]: USER_START pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:52.987000 audit[6064]: CRED_ACQ pid=6064 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.168891 kernel: audit: type=1103 audit(1768824652.987:875): pid=6064 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.409033 sshd[6064]: Connection closed by 10.0.0.1 port 39230 Jan 19 12:10:53.413970 sshd-session[6060]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:53.419000 audit[6060]: USER_END pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.427043 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:39230.service: Deactivated successfully. Jan 19 12:10:53.434963 systemd[1]: session-23.scope: Deactivated successfully. Jan 19 12:10:53.439716 systemd-logind[1598]: Session 23 logged out. Waiting for processes to exit. Jan 19 12:10:53.446628 systemd-logind[1598]: Removed session 23. 
Jan 19 12:10:53.476617 kernel: audit: type=1106 audit(1768824653.419:876): pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.477868 kernel: audit: type=1104 audit(1768824653.419:877): pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.419000 audit[6060]: CRED_DISP pid=6060 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:53.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.55:22-10.0.0.1:39230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:54.511619 update_engine[1599]: I20260119 12:10:54.510803 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 19 12:10:54.511619 update_engine[1599]: I20260119 12:10:54.511022 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 19 12:10:54.514797 update_engine[1599]: I20260119 12:10:54.514644 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 19 12:10:54.530674 update_engine[1599]: E20260119 12:10:54.529908 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 19 12:10:54.530806 update_engine[1599]: I20260119 12:10:54.530730 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 19 12:10:54.530806 update_engine[1599]: I20260119 12:10:54.530754 1599 omaha_request_action.cc:617] Omaha request response: Jan 19 12:10:54.531340 update_engine[1599]: E20260119 12:10:54.530869 1599 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 19 12:10:54.531340 update_engine[1599]: I20260119 12:10:54.531044 1599 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 19 12:10:54.531340 update_engine[1599]: I20260119 12:10:54.531061 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 19 12:10:54.531340 update_engine[1599]: I20260119 12:10:54.531071 1599 update_attempter.cc:306] Processing Done. Jan 19 12:10:54.531794 update_engine[1599]: E20260119 12:10:54.531605 1599 update_attempter.cc:619] Update failed. Jan 19 12:10:54.531794 update_engine[1599]: I20260119 12:10:54.531628 1599 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 19 12:10:54.531794 update_engine[1599]: I20260119 12:10:54.531641 1599 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 19 12:10:54.531794 update_engine[1599]: I20260119 12:10:54.531651 1599 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 19 12:10:54.532558 update_engine[1599]: I20260119 12:10:54.531869 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 19 12:10:54.532558 update_engine[1599]: I20260119 12:10:54.532029 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 19 12:10:54.532558 update_engine[1599]: I20260119 12:10:54.532047 1599 omaha_request_action.cc:272] Request: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: Jan 19 12:10:54.532558 update_engine[1599]: I20260119 12:10:54.532059 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 19 12:10:54.532558 update_engine[1599]: I20260119 12:10:54.532348 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 19 12:10:54.533987 update_engine[1599]: I20260119 12:10:54.533725 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 19 12:10:54.534837 locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 19 12:10:54.551948 update_engine[1599]: E20260119 12:10:54.551622 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551824 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551838 1599 omaha_request_action.cc:617] Omaha request response: Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551847 1599 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551853 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551859 1599 update_attempter.cc:306] Processing Done. Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551870 1599 update_attempter.cc:310] Error event sent. Jan 19 12:10:54.551948 update_engine[1599]: I20260119 12:10:54.551880 1599 update_check_scheduler.cc:74] Next update check in 46m32s Jan 19 12:10:54.553686 locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 19 12:10:58.439061 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:39238.service - OpenSSH per-connection server daemon (10.0.0.1:39238). Jan 19 12:10:58.453568 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:10:58.453707 kernel: audit: type=1130 audit(1768824658.438:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.55:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:58.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.55:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:10:58.453861 kubelet[2862]: E0119 12:10:58.445827 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c46bc687f-z2lmj" podUID="acf44a01-9bd4-43fa-8dda-bec90148f2fd" Jan 19 12:10:58.463548 kubelet[2862]: E0119 12:10:58.462680 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sh4c8" podUID="fe7ed3bd-4172-4537-91b9-e5f33dbfd6b5" Jan 19 12:10:58.700000 audit[6079]: USER_ACCT pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.739682 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 39238 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:10:58.749823 kernel: audit: type=1101 audit(1768824658.700:880): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.751000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.754350 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:10:58.797766 kernel: audit: type=1103 audit(1768824658.751:881): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.810988 systemd-logind[1598]: New session 24 of user core. 
Jan 19 12:10:58.827629 kernel: audit: type=1006 audit(1768824658.751:882): pid=6079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 19 12:10:58.751000 audit[6079]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3756d5f0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:58.832799 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 19 12:10:58.881358 kernel: audit: type=1300 audit(1768824658.751:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3756d5f0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:10:58.751000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:58.905696 kernel: audit: type=1327 audit(1768824658.751:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:10:58.906046 kernel: audit: type=1105 audit(1768824658.849:883): pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.849000 audit[6079]: USER_START pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.006603 kernel: audit: type=1103 audit(1768824658.880:884): pid=6083 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:58.880000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.280852 sshd[6083]: Connection closed by 10.0.0.1 port 39238 Jan 19 12:10:59.281598 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Jan 19 12:10:59.285000 audit[6079]: USER_END pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.347761 kernel: audit: type=1106 audit(1768824659.285:885): pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.293385 systemd-logind[1598]: Session 24 logged out. 
Waiting for processes to exit. Jan 19 12:10:59.301893 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:39238.service: Deactivated successfully. Jan 19 12:10:59.311519 systemd[1]: session-24.scope: Deactivated successfully. Jan 19 12:10:59.318615 systemd-logind[1598]: Removed session 24. Jan 19 12:10:59.286000 audit[6079]: CRED_DISP pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.398851 kernel: audit: type=1104 audit(1768824659.286:886): pid=6079 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:10:59.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.55:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:10:59.436016 containerd[1618]: time="2026-01-19T12:10:59.435054656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:10:59.523985 containerd[1618]: time="2026-01-19T12:10:59.522341145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:10:59.528386 containerd[1618]: time="2026-01-19T12:10:59.527753294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:10:59.528386 containerd[1618]: time="2026-01-19T12:10:59.527830228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:10:59.528636 kubelet[2862]: E0119 12:10:59.527964 2862 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:59.528636 kubelet[2862]: E0119 12:10:59.528006 2862 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:10:59.531670 kubelet[2862]: E0119 12:10:59.528081 2862 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-595df97b5c-7r9mw_calico-apiserver(57042a5e-1534-485c-abeb-75f1e57f8cf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:10:59.531670 kubelet[2862]: E0119 12:10:59.531325 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-595df97b5c-7r9mw" podUID="57042a5e-1534-485c-abeb-75f1e57f8cf0" Jan 19 12:11:00.431786 kubelet[2862]: E0119 12:11:00.430918 2862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9qp6h" podUID="c862d411-4d4f-4a97-b967-49e1eb15851d"