Jan 27 12:52:55.476124 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 10:13:49 -00 2026 Jan 27 12:52:55.476147 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 12:52:55.476155 kernel: BIOS-provided physical RAM map: Jan 27 12:52:55.476164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 27 12:52:55.476170 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 27 12:52:55.476176 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 27 12:52:55.476183 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 27 12:52:55.476189 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 27 12:52:55.476195 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 27 12:52:55.476201 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 27 12:52:55.476207 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 27 12:52:55.476215 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 27 12:52:55.476221 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 27 12:52:55.476227 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 27 12:52:55.476234 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 27 12:52:55.476240 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 27 12:52:55.476249 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 27 12:52:55.476255 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 27 12:52:55.476261 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 27 12:52:55.476268 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 27 12:52:55.476274 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 27 12:52:55.476280 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 27 12:52:55.476287 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 27 12:52:55.476293 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 27 12:52:55.476299 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 27 12:52:55.476305 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 27 12:52:55.476314 kernel: NX (Execute Disable) protection: active Jan 27 12:52:55.476355 kernel: APIC: Static calls initialized Jan 27 12:52:55.476362 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 27 12:52:55.476369 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 27 12:52:55.476375 kernel: extended physical RAM map: Jan 27 12:52:55.476381 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 27 12:52:55.476388 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 27 12:52:55.476394 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 27 12:52:55.476400 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 27 12:52:55.476407 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 27 12:52:55.476413 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 27 12:52:55.476422 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 27 12:52:55.476428 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 27 12:52:55.476435 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 27 12:52:55.476444 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 27 12:52:55.476453 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 27 12:52:55.476459 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 27 12:52:55.476466 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 27 12:52:55.476473 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 27 12:52:55.476480 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 27 12:52:55.476486 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 27 12:52:55.476493 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 27 12:52:55.476500 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 27 12:52:55.476507 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 27 12:52:55.476515 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 27 12:52:55.476522 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 27 12:52:55.476528 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 27 12:52:55.476535 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 27 12:52:55.476542 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 27 12:52:55.476548 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 27 12:52:55.476555 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 27 12:52:55.476562 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 27 12:52:55.476632 kernel: efi: EFI v2.7 by EDK II Jan 27 12:52:55.476639 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 27 12:52:55.476646 kernel: random: crng init done Jan 27 12:52:55.476656 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 27 12:52:55.476662 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 27 12:52:55.476669 kernel: secureboot: Secure boot disabled Jan 27 12:52:55.476676 kernel: SMBIOS 2.8 present. 
Jan 27 12:52:55.476682 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 27 12:52:55.476689 kernel: DMI: Memory slots populated: 1/1 Jan 27 12:52:55.476695 kernel: Hypervisor detected: KVM Jan 27 12:52:55.476702 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 27 12:52:55.476709 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 27 12:52:55.476716 kernel: kvm-clock: using sched offset of 6656894518 cycles Jan 27 12:52:55.476722 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 27 12:52:55.476732 kernel: tsc: Detected 2445.424 MHz processor Jan 27 12:52:55.476739 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 27 12:52:55.476746 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 27 12:52:55.476753 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 27 12:52:55.476760 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 27 12:52:55.476767 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 27 12:52:55.476774 kernel: Using GB pages for direct mapping Jan 27 12:52:55.476783 kernel: ACPI: Early table checksum verification disabled Jan 27 12:52:55.476790 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 27 12:52:55.476797 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 27 12:52:55.476804 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476811 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476818 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 27 12:52:55.476825 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476832 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476841 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476848 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 12:52:55.476855 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 27 12:52:55.476862 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 27 12:52:55.476869 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 27 12:52:55.476876 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 27 12:52:55.476883 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 27 12:52:55.476892 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 27 12:52:55.476899 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 27 12:52:55.476906 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 27 12:52:55.476912 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 27 12:52:55.476919 kernel: No NUMA configuration found Jan 27 12:52:55.476926 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 27 12:52:55.476933 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 27 12:52:55.476942 kernel: Zone ranges: Jan 27 12:52:55.476949 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 27 12:52:55.476956 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 27 12:52:55.476963 kernel: Normal empty Jan 27 12:52:55.476970 kernel: Device empty Jan 27 
12:52:55.476977 kernel: Movable zone start for each node Jan 27 12:52:55.476983 kernel: Early memory node ranges Jan 27 12:52:55.476990 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 27 12:52:55.476999 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 27 12:52:55.477006 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 27 12:52:55.477013 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 27 12:52:55.477019 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 27 12:52:55.477026 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 27 12:52:55.477033 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 27 12:52:55.477040 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 27 12:52:55.477047 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 27 12:52:55.477056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 27 12:52:55.477069 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 27 12:52:55.477078 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 27 12:52:55.477085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 27 12:52:55.477092 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 27 12:52:55.477099 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 27 12:52:55.477107 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 27 12:52:55.477114 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 27 12:52:55.477121 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 27 12:52:55.477130 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 27 12:52:55.477138 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 27 12:52:55.477145 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 27 12:52:55.477152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 27 12:52:55.477159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 27 12:52:55.477169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 27 12:52:55.477176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 27 12:52:55.477183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 27 12:52:55.477190 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 27 12:52:55.477198 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 27 12:52:55.477205 kernel: TSC deadline timer available Jan 27 12:52:55.477212 kernel: CPU topo: Max. logical packages: 1 Jan 27 12:52:55.477221 kernel: CPU topo: Max. logical dies: 1 Jan 27 12:52:55.477228 kernel: CPU topo: Max. dies per package: 1 Jan 27 12:52:55.477235 kernel: CPU topo: Max. threads per core: 1 Jan 27 12:52:55.477242 kernel: CPU topo: Num. cores per package: 4 Jan 27 12:52:55.477249 kernel: CPU topo: Num. 
threads per package: 4 Jan 27 12:52:55.477256 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 27 12:52:55.477263 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 27 12:52:55.477271 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 27 12:52:55.477280 kernel: kvm-guest: setup PV sched yield Jan 27 12:52:55.477287 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 27 12:52:55.477294 kernel: Booting paravirtualized kernel on KVM Jan 27 12:52:55.477302 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 27 12:52:55.477309 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 27 12:52:55.477316 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 27 12:52:55.477354 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 27 12:52:55.477364 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 27 12:52:55.477372 kernel: kvm-guest: PV spinlocks enabled Jan 27 12:52:55.477379 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 27 12:52:55.477387 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 12:52:55.477395 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 27 12:52:55.477403 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 27 12:52:55.477412 kernel: Fallback order for Node 0: 0 Jan 27 12:52:55.477419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 27 12:52:55.477427 kernel: Policy zone: DMA32 Jan 27 12:52:55.477434 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 27 12:52:55.477441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 27 12:52:55.477453 kernel: ftrace: allocating 40128 entries in 157 pages Jan 27 12:52:55.477467 kernel: ftrace: allocated 157 pages with 5 groups Jan 27 12:52:55.477479 kernel: Dynamic Preempt: voluntary Jan 27 12:52:55.477494 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 27 12:52:55.477506 kernel: rcu: RCU event tracing is enabled. Jan 27 12:52:55.477516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 27 12:52:55.477527 kernel: Trampoline variant of Tasks RCU enabled. Jan 27 12:52:55.477537 kernel: Rude variant of Tasks RCU enabled. Jan 27 12:52:55.477550 kernel: Tracing variant of Tasks RCU enabled. Jan 27 12:52:55.477561 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 27 12:52:55.477701 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 27 12:52:55.477713 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 27 12:52:55.477727 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 27 12:52:55.477738 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 27 12:52:55.477748 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 27 12:52:55.477759 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 27 12:52:55.477769 kernel: Console: colour dummy device 80x25 Jan 27 12:52:55.477784 kernel: printk: legacy console [ttyS0] enabled Jan 27 12:52:55.477795 kernel: ACPI: Core revision 20240827 Jan 27 12:52:55.477808 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 27 12:52:55.477820 kernel: APIC: Switch to symmetric I/O mode setup Jan 27 12:52:55.477830 kernel: x2apic enabled Jan 27 12:52:55.477840 kernel: APIC: Switched APIC routing to: physical x2apic Jan 27 12:52:55.477850 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 27 12:52:55.477866 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 27 12:52:55.477879 kernel: kvm-guest: setup PV IPIs Jan 27 12:52:55.477889 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 27 12:52:55.477899 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 27 12:52:55.477910 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 27 12:52:55.477922 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 27 12:52:55.477935 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 27 12:52:55.477949 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 27 12:52:55.477960 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 27 12:52:55.477970 kernel: Spectre V2 : Mitigation: Retpolines Jan 27 12:52:55.477980 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 27 12:52:55.477991 kernel: Speculative Store Bypass: Vulnerable Jan 27 12:52:55.478004 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 27 12:52:55.478016 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 27 12:52:55.478030 kernel: active return thunk: srso_alias_return_thunk Jan 27 12:52:55.478040 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 27 12:52:55.478051 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 27 12:52:55.478064 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 27 12:52:55.478077 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 27 12:52:55.478084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 27 12:52:55.478092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 27 12:52:55.478101 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 27 12:52:55.478109 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 27 12:52:55.478116 kernel: Freeing SMP alternatives memory: 32K Jan 27 12:52:55.478124 kernel: pid_max: default: 32768 minimum: 301 Jan 27 12:52:55.478131 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 27 12:52:55.478138 kernel: landlock: Up and running. Jan 27 12:52:55.478145 kernel: SELinux: Initializing. 
Jan 27 12:52:55.478154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 27 12:52:55.478162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 27 12:52:55.478169 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 27 12:52:55.478177 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 27 12:52:55.478184 kernel: signal: max sigframe size: 1776 Jan 27 12:52:55.478191 kernel: rcu: Hierarchical SRCU implementation. Jan 27 12:52:55.478199 kernel: rcu: Max phase no-delay instances is 400. Jan 27 12:52:55.478208 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 27 12:52:55.478216 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 27 12:52:55.478223 kernel: smp: Bringing up secondary CPUs ... Jan 27 12:52:55.478230 kernel: smpboot: x86: Booting SMP configuration: Jan 27 12:52:55.478237 kernel: .... node #0, CPUs: #1 #2 #3 Jan 27 12:52:55.478245 kernel: smp: Brought up 1 node, 4 CPUs Jan 27 12:52:55.478252 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 27 12:52:55.478262 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15532K init, 2504K bss, 120816K reserved, 0K cma-reserved) Jan 27 12:52:55.478269 kernel: devtmpfs: initialized Jan 27 12:52:55.478277 kernel: x86/mm: Memory block size: 128MB Jan 27 12:52:55.478284 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 27 12:52:55.478292 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 27 12:52:55.478299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 27 12:52:55.478306 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 27 12:52:55.478316 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 27 12:52:55.478364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 27 12:52:55.478372 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 27 12:52:55.478380 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 27 12:52:55.478387 kernel: pinctrl core: initialized pinctrl subsystem Jan 27 12:52:55.478394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 27 12:52:55.478402 kernel: audit: initializing netlink subsys (disabled) Jan 27 12:52:55.478412 kernel: audit: type=2000 audit(1769518371.371:1): state=initialized audit_enabled=0 res=1 Jan 27 12:52:55.478419 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 27 12:52:55.478427 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 27 12:52:55.478434 kernel: cpuidle: using governor menu Jan 27 12:52:55.478441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 27 12:52:55.478449 kernel: dca service started, version 1.12.1 Jan 27 12:52:55.478456 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 27 12:52:55.478465 kernel: PCI: Using configuration type 1 for base access Jan 27 12:52:55.478473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 27 12:52:55.478480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 27 12:52:55.478487 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 27 12:52:55.478495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 27 12:52:55.478502 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 27 12:52:55.478509 kernel: ACPI: Added _OSI(Module Device) Jan 27 12:52:55.478518 kernel: ACPI: Added _OSI(Processor Device) Jan 27 12:52:55.478525 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 27 12:52:55.478533 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 27 12:52:55.478540 kernel: ACPI: Interpreter enabled Jan 27 12:52:55.478547 kernel: ACPI: PM: (supports S0 S3 S5) Jan 27 12:52:55.478554 kernel: ACPI: Using IOAPIC for interrupt routing Jan 27 12:52:55.478561 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 27 12:52:55.478614 kernel: PCI: Using E820 reservations for host bridge windows Jan 27 12:52:55.478625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 27 12:52:55.478632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 27 12:52:55.478875 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 27 12:52:55.479056 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 27 12:52:55.479229 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 27 12:52:55.479243 kernel: PCI host bridge to bus 0000:00 Jan 27 12:52:55.479456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 27 12:52:55.479684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 27 12:52:55.479844 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 27 12:52:55.479998 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 27 12:52:55.480152 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 27 12:52:55.480312 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 27 12:52:55.480676 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 27 12:52:55.480880 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 27 12:52:55.481389 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 27 12:52:55.481876 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 27 12:52:55.482055 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 27 12:52:55.482221 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 27 12:52:55.482431 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 27 12:52:55.482686 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 27 12:52:55.482859 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 27 12:52:55.483030 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 27 12:52:55.483203 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 27 12:52:55.483846 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 27 12:52:55.484416 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 27 12:52:55.485131 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jan 27 12:52:55.485684 kernel: pci 0000:00:03.0: BAR 4 [mem 
0x380000004000-0x380000007fff 64bit pref] Jan 27 12:52:55.486239 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 27 12:52:55.486791 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 27 12:52:55.487398 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 27 12:52:55.487942 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 27 12:52:55.488185 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 27 12:52:55.488439 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 27 12:52:55.488707 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 27 12:52:55.488889 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 27 12:52:55.489058 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 27 12:52:55.489230 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 27 12:52:55.489450 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 27 12:52:55.489679 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 27 12:52:55.489696 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 27 12:52:55.489704 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 27 12:52:55.489712 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 27 12:52:55.489719 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 27 12:52:55.489727 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 27 12:52:55.489734 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 27 12:52:55.489741 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 27 12:52:55.489751 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 27 12:52:55.489759 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 27 12:52:55.489766 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 27 12:52:55.489773 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 27 12:52:55.489781 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 27 12:52:55.489788 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 27 12:52:55.489795 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 27 12:52:55.489805 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 27 12:52:55.489812 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 27 12:52:55.489819 kernel: iommu: Default domain type: Translated Jan 27 12:52:55.489827 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 27 12:52:55.489834 kernel: efivars: Registered efivars operations Jan 27 12:52:55.489841 kernel: PCI: Using ACPI for IRQ routing Jan 27 12:52:55.489849 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 27 12:52:55.489858 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 27 12:52:55.489865 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 27 12:52:55.489872 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 27 12:52:55.489880 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 27 12:52:55.489887 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 27 12:52:55.489894 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 27 12:52:55.489902 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 27 12:52:55.489911 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 27 
12:52:55.490078 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 27 12:52:55.490244 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 27 12:52:55.490456 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 27 12:52:55.490467 kernel: vgaarb: loaded Jan 27 12:52:55.490475 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 27 12:52:55.490483 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 27 12:52:55.490494 kernel: clocksource: Switched to clocksource kvm-clock Jan 27 12:52:55.490501 kernel: VFS: Disk quotas dquot_6.6.0 Jan 27 12:52:55.490509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 27 12:52:55.490516 kernel: pnp: PnP ACPI init Jan 27 12:52:55.490757 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 27 12:52:55.490770 kernel: pnp: PnP ACPI: found 6 devices Jan 27 12:52:55.490782 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 27 12:52:55.490793 kernel: NET: Registered PF_INET protocol family Jan 27 12:52:55.490801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 27 12:52:55.490809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 27 12:52:55.490817 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 27 12:52:55.490824 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 27 12:52:55.490846 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 27 12:52:55.490857 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 27 12:52:55.490865 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 12:52:55.490873 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 12:52:55.490880 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 27 12:52:55.490888 kernel: NET: Registered PF_XDP protocol family Jan 27 12:52:55.491057 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 27 12:52:55.491224 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 27 12:52:55.491437 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 27 12:52:55.491654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 27 12:52:55.491841 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 27 12:52:55.492051 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 27 12:52:55.492264 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 27 12:52:55.492526 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 27 12:52:55.492544 kernel: PCI: CLS 0 bytes, default 64 Jan 27 12:52:55.492553 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 27 12:52:55.492561 kernel: Initialise system trusted keyrings Jan 27 12:52:55.492643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 27 12:52:55.492651 kernel: Key type asymmetric registered Jan 27 12:52:55.492659 kernel: Asymmetric key parser 'x509' registered Jan 27 12:52:55.492666 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 27 12:52:55.492677 kernel: io scheduler mq-deadline registered Jan 27 12:52:55.492685 kernel: io scheduler kyber registered Jan 27 
12:52:55.492693 kernel: io scheduler bfq registered Jan 27 12:52:55.492701 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 27 12:52:55.492709 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 27 12:52:55.492717 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 27 12:52:55.492725 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 27 12:52:55.492735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 27 12:52:55.492743 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 27 12:52:55.492751 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 27 12:52:55.492759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 27 12:52:55.492766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 27 12:52:55.492954 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 27 12:52:55.492967 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 27 12:52:55.493128 kernel: rtc_cmos 00:04: registered as rtc0 Jan 27 12:52:55.493290 kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T12:52:53 UTC (1769518373) Jan 27 12:52:55.493666 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 27 12:52:55.493680 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 27 12:52:55.493693 kernel: efifb: probing for efifb Jan 27 12:52:55.493701 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 27 12:52:55.493709 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 27 12:52:55.493717 kernel: efifb: scrolling: redraw Jan 27 12:52:55.493725 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 27 12:52:55.493733 kernel: Console: switching to colour frame buffer device 160x50 Jan 27 12:52:55.493741 kernel: fb0: EFI VGA frame buffer device Jan 27 12:52:55.493751 kernel: pstore: Using crash dump compression: deflate Jan 27 12:52:55.493759 kernel: pstore: Registered efi_pstore as persistent store backend Jan 27 12:52:55.493766 kernel: NET: Registered PF_INET6 protocol family Jan 27 12:52:55.493774 kernel: Segment Routing with IPv6 Jan 27 12:52:55.493782 kernel: In-situ OAM (IOAM) with IPv6 Jan 27 12:52:55.493790 kernel: NET: Registered PF_PACKET protocol family Jan 27 12:52:55.493798 kernel: Key type dns_resolver registered Jan 27 12:52:55.493805 kernel: IPI shorthand broadcast: enabled Jan 27 12:52:55.493815 kernel: sched_clock: Marking stable (2347029971, 895040472)->(3467872691, -225802248) Jan 27 12:52:55.493823 kernel: registered taskstats version 1 Jan 27 12:52:55.493831 kernel: Loading compiled-in X.509 certificates Jan 27 12:52:55.493839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 6484c7cab6759552a733ebda9eed387628fa30ee' Jan 27 12:52:55.493847 kernel: Demotion targets for Node 0: null Jan 27 12:52:55.493854 kernel: Key type .fscrypt registered Jan 27 12:52:55.493862 kernel: Key type fscrypt-provisioning registered Jan 27 12:52:55.493872 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 27 12:52:55.493880 kernel: ima: Allocated hash algorithm: sha1 Jan 27 12:52:55.493888 kernel: ima: No architecture policies found Jan 27 12:52:55.493896 kernel: clk: Disabling unused clocks Jan 27 12:52:55.493904 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 27 12:52:55.493912 kernel: Write protecting the kernel read-only data: 47104k Jan 27 12:52:55.493921 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 27 12:52:55.493929 kernel: Run /init as init process Jan 27 12:52:55.493937 kernel: with arguments: Jan 27 12:52:55.493945 kernel: /init Jan 27 12:52:55.493953 kernel: with environment: Jan 27 12:52:55.493960 kernel: HOME=/ Jan 27 12:52:55.493968 kernel: TERM=linux Jan 27 12:52:55.493976 kernel: SCSI subsystem initialized Jan 27 12:52:55.493985 kernel: libata version 3.00 loaded. Jan 27 12:52:55.494165 kernel: ahci 0000:00:1f.2: version 3.0 Jan 27 12:52:55.494177 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 27 12:52:55.494395 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 27 12:52:55.494624 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 27 12:52:55.494805 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 27 12:52:55.494994 kernel: scsi host0: ahci Jan 27 12:52:55.495180 kernel: scsi host1: ahci Jan 27 12:52:55.495403 kernel: scsi host2: ahci Jan 27 12:52:55.495699 kernel: scsi host3: ahci Jan 27 12:52:55.495892 kernel: scsi host4: ahci Jan 27 12:52:55.496071 kernel: scsi host5: ahci Jan 27 12:52:55.496087 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 27 12:52:55.496095 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 27 12:52:55.496103 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 27 12:52:55.496111 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 27 12:52:55.496119 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 27 12:52:55.496126 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 27 12:52:55.496137 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 27 12:52:55.496144 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 27 12:52:55.496152 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 27 12:52:55.496160 kernel: ata3.00: LPM support broken, forcing max_power Jan 27 12:52:55.496168 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 27 12:52:55.496176 kernel: ata3.00: applying bridge limits Jan 27 12:52:55.496185 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 27 12:52:55.496195 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 27 12:52:55.496203 kernel: ata3.00: LPM support broken, forcing max_power Jan 27 12:52:55.496210 kernel: ata3.00: configured for UDMA/100 Jan 27 12:52:55.496218 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 27 12:52:55.496464 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 27 12:52:55.496760 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 27 12:52:55.496935 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 27 12:52:55.496951 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 27 12:52:55.496959 kernel: GPT:16515071 != 27000831 Jan 27 12:52:55.496967 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 27 12:52:55.496974 kernel: GPT:16515071 != 27000831 Jan 27 12:52:55.496982 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 27 12:52:55.496990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 27 12:52:55.497257 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 27 12:52:55.497273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 27 12:52:55.497503 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 27 12:52:55.497515 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 27 12:52:55.497524 kernel: device-mapper: uevent: version 1.0.3 Jan 27 12:52:55.497532 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 27 12:52:55.497540 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 27 12:52:55.497551 kernel: raid6: avx2x4 gen() 38258 MB/s Jan 27 12:52:55.497559 kernel: raid6: avx2x2 gen() 37135 MB/s Jan 27 12:52:55.497618 kernel: raid6: avx2x1 gen() 28286 MB/s Jan 27 12:52:55.497627 kernel: raid6: using algorithm avx2x4 gen() 38258 MB/s Jan 27 12:52:55.497635 kernel: raid6: .... xor() 5084 MB/s, rmw enabled Jan 27 12:52:55.497643 kernel: raid6: using avx2x2 recovery algorithm Jan 27 12:52:55.497652 kernel: xor: automatically using best checksumming function avx Jan 27 12:52:55.497662 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 27 12:52:55.497671 kernel: BTRFS: device fsid 268ba60b-442b-419d-aa1b-56d952d69f85 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 27 12:52:55.497679 kernel: BTRFS info (device dm-0): first mount of filesystem 268ba60b-442b-419d-aa1b-56d952d69f85 Jan 27 12:52:55.497687 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 27 12:52:55.497695 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 27 12:52:55.497703 kernel: BTRFS info (device dm-0): enabling free space tree Jan 27 12:52:55.497711 kernel: loop: module loaded Jan 27 12:52:55.497721 kernel: loop0: detected capacity change from 0 to 100536 Jan 27 12:52:55.497729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 27 12:52:55.497738 systemd[1]: Successfully made /usr/ read-only. Jan 27 12:52:55.497749 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 12:52:55.497758 systemd[1]: Detected virtualization kvm. Jan 27 12:52:55.497766 systemd[1]: Detected architecture x86-64. Jan 27 12:52:55.497776 systemd[1]: Running in initrd. Jan 27 12:52:55.497784 systemd[1]: No hostname configured, using default hostname. Jan 27 12:52:55.497792 systemd[1]: Hostname set to . Jan 27 12:52:55.497800 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 12:52:55.497809 systemd[1]: Queued start job for default target initrd.target. Jan 27 12:52:55.497817 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 12:52:55.497825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 12:52:55.497836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 27 12:52:55.497844 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 27 12:52:55.497853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 12:52:55.497862 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 27 12:52:55.497871 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 27 12:52:55.497881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 12:52:55.497889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 12:52:55.497898 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 27 12:52:55.497906 systemd[1]: Reached target paths.target - Path Units. Jan 27 12:52:55.497914 systemd[1]: Reached target slices.target - Slice Units. Jan 27 12:52:55.497922 systemd[1]: Reached target swap.target - Swaps. Jan 27 12:52:55.497930 systemd[1]: Reached target timers.target - Timer Units. Jan 27 12:52:55.497940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 12:52:55.497949 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 12:52:55.497957 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 12:52:55.497965 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 27 12:52:55.497973 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 27 12:52:55.497981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 12:52:55.497989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 12:52:55.498000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 12:52:55.498008 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 12:52:55.498016 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 27 12:52:55.498025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 27 12:52:55.498033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 27 12:52:55.498041 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 27 12:52:55.498052 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 27 12:52:55.498061 systemd[1]: Starting systemd-fsck-usr.service... Jan 27 12:52:55.498069 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 12:52:55.498077 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 12:52:55.498085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 12:52:55.498096 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 27 12:52:55.498104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 12:52:55.498112 systemd[1]: Finished systemd-fsck-usr.service. Jan 27 12:52:55.498120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 27 12:52:55.498155 systemd-journald[318]: Collecting audit messages is enabled. 
Jan 27 12:52:55.498177 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 27 12:52:55.498185 kernel: Bridge firewalling registered Jan 27 12:52:55.498194 systemd-journald[318]: Journal started Jan 27 12:52:55.498214 systemd-journald[318]: Runtime Journal (/run/log/journal/0b9bbdc1dfe244cabf927d45adfbbde3) is 6M, max 48M, 42M free. Jan 27 12:52:55.500944 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 12:52:55.501375 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 27 12:52:55.509145 kernel: audit: type=1130 audit(1769518375.501:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.509209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 12:52:55.526087 kernel: audit: type=1130 audit(1769518375.509:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.526397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:52:55.539139 kernel: audit: type=1130 audit(1769518375.525:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.537206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 27 12:52:55.546550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 12:52:55.547460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 12:52:55.572042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 12:52:55.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.573938 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 12:52:55.584389 kernel: audit: type=1130 audit(1769518375.571:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.603826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 27 12:52:55.615197 kernel: audit: type=1130 audit(1769518375.605:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.614445 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 27 12:52:55.618390 systemd-tmpfiles[337]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 27 12:52:55.640421 kernel: audit: type=1130 audit(1769518375.627:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.624822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 12:52:55.646128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 12:52:55.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.654110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 12:52:55.676932 kernel: audit: type=1130 audit(1769518375.645:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.676959 kernel: audit: type=1130 audit(1769518375.661:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.677009 dracut-cmdline[353]: dracut-109 Jan 27 12:52:55.677009 dracut-cmdline[353]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 12:52:55.700419 kernel: audit: type=1334 audit(1769518375.678:10): prog-id=6 op=LOAD Jan 27 12:52:55.678000 audit: BPF prog-id=6 op=LOAD Jan 27 12:52:55.680489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 12:52:55.754392 systemd-resolved[376]: Positive Trust Anchors: Jan 27 12:52:55.754424 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 12:52:55.754428 systemd-resolved[376]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 12:52:55.754455 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 12:52:55.811660 systemd-resolved[376]: Defaulting to hostname 'linux'. Jan 27 12:52:55.812834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 12:52:55.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:55.816118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 12:52:55.904653 kernel: Loading iSCSI transport class v2.0-870. Jan 27 12:52:55.923621 kernel: iscsi: registered transport (tcp) Jan 27 12:52:55.955604 kernel: iscsi: registered transport (qla4xxx) Jan 27 12:52:55.955696 kernel: QLogic iSCSI HBA Driver Jan 27 12:52:55.992521 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 12:52:56.037876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 12:52:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.039982 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 12:52:56.117709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 27 12:52:56.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.119561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 27 12:52:56.135819 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 27 12:52:56.184935 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 27 12:52:56.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.185000 audit: BPF prog-id=7 op=LOAD Jan 27 12:52:56.185000 audit: BPF prog-id=8 op=LOAD Jan 27 12:52:56.188761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 12:52:56.235220 systemd-udevd[586]: Using default interface naming scheme 'v257'. Jan 27 12:52:56.252975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 12:52:56.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:52:56.261894 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 27 12:52:56.306554 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation Jan 27 12:52:56.323143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 12:52:56.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.331000 audit: BPF prog-id=9 op=LOAD Jan 27 12:52:56.333122 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 12:52:56.360053 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 12:52:56.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.368507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 12:52:56.422638 systemd-networkd[712]: lo: Link UP Jan 27 12:52:56.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.422665 systemd-networkd[712]: lo: Gained carrier Jan 27 12:52:56.423443 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 12:52:56.430944 systemd[1]: Reached target network.target - Network. Jan 27 12:52:56.495900 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 12:52:56.520764 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 27 12:52:56.520795 kernel: audit: type=1130 audit(1769518376.500:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.520795 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 27 12:52:56.577823 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 27 12:52:56.604936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 27 12:52:56.637553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 27 12:52:56.644437 kernel: cryptd: max_cpu_qlen set to 1000 Jan 27 12:52:56.675629 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 27 12:52:56.678698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 12:52:56.689984 kernel: AES CTR mode by8 optimization enabled Jan 27 12:52:56.699927 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 27 12:52:56.733429 kernel: audit: type=1131 audit(1769518376.713:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:52:56.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.706912 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 12:52:56.707122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:52:56.714658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 12:52:56.715812 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 12:52:56.715818 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 12:52:56.717423 systemd-networkd[712]: eth0: Link UP Jan 27 12:52:56.717713 systemd-networkd[712]: eth0: Gained carrier Jan 27 12:52:56.717723 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 12:52:56.736835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 12:52:56.781749 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 27 12:52:56.782386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 12:52:56.806644 kernel: audit: type=1130 audit(1769518376.785:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.806670 kernel: audit: type=1131 audit(1769518376.785:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.782492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:52:56.789749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 12:52:56.826933 disk-uuid[840]: Primary Header is updated. Jan 27 12:52:56.826933 disk-uuid[840]: Secondary Entries is updated. Jan 27 12:52:56.826933 disk-uuid[840]: Secondary Header is updated. Jan 27 12:52:56.864832 kernel: audit: type=1130 audit(1769518376.829:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.829205 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 27 12:52:56.838983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 27 12:52:56.845742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 12:52:56.850162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 12:52:56.857127 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 27 12:52:56.896826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:52:56.913028 kernel: audit: type=1130 audit(1769518376.903:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.931218 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 27 12:52:56.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:56.950702 kernel: audit: type=1130 audit(1769518376.939:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.879520 disk-uuid[844]: Warning: The kernel is still using the old partition table. Jan 27 12:52:57.879520 disk-uuid[844]: The new table will be used at the next reboot or after you Jan 27 12:52:57.879520 disk-uuid[844]: run partprobe(8) or kpartx(8) Jan 27 12:52:57.879520 disk-uuid[844]: The operation has completed successfully. Jan 27 12:52:57.896849 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 27 12:52:57.916275 kernel: audit: type=1130 audit(1769518377.896:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.916314 kernel: audit: type=1131 audit(1769518377.896:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.897014 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 27 12:52:57.916632 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 27 12:52:57.974297 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Jan 27 12:52:57.974381 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 12:52:57.974407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 12:52:57.984041 kernel: BTRFS info (device vda6): turning on async discard Jan 27 12:52:57.984073 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 12:52:57.995629 kernel: BTRFS info (device vda6): last unmount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 12:52:57.997306 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 27 12:52:58.012661 kernel: audit: type=1130 audit(1769518377.996:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:57.998698 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 27 12:52:58.120052 ignition[884]: Ignition 2.24.0 Jan 27 12:52:58.120088 ignition[884]: Stage: fetch-offline Jan 27 12:52:58.120126 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 27 12:52:58.120137 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:52:58.120218 ignition[884]: parsed url from cmdline: "" Jan 27 12:52:58.120222 ignition[884]: no config URL provided Jan 27 12:52:58.120227 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 12:52:58.120237 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 27 12:52:58.126688 ignition[884]: op(1): [started] loading QEMU firmware config module Jan 27 12:52:58.126695 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 27 12:52:58.150893 ignition[884]: op(1): [finished] loading QEMU firmware config module Jan 27 12:52:58.349522 ignition[884]: parsing config with SHA512: 96248c8d9b0d3712a0a06620761160f4c192184cfb6979aff0794228e2c769e8cb545131eb7df1b734fd172252dfdc2c5ccea795baef975ebc3a35975af4d067 Jan 27 12:52:58.355401 unknown[884]: fetched base config from "system" Jan 27 12:52:58.355417 unknown[884]: fetched user config from "qemu" Jan 27 12:52:58.362447 ignition[884]: fetch-offline: fetch-offline passed Jan 27 12:52:58.362528 ignition[884]: Ignition finished successfully Jan 27 12:52:58.366043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 12:52:58.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:58.368404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 27 12:52:58.369497 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 27 12:52:58.401854 systemd-networkd[712]: eth0: Gained IPv6LL Jan 27 12:52:58.410455 ignition[894]: Ignition 2.24.0 Jan 27 12:52:58.410487 ignition[894]: Stage: kargs Jan 27 12:52:58.410718 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 27 12:52:58.410736 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:52:58.411693 ignition[894]: kargs: kargs passed Jan 27 12:52:58.411736 ignition[894]: Ignition finished successfully Jan 27 12:52:58.429492 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 27 12:52:58.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:58.436161 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 27 12:52:58.476934 ignition[901]: Ignition 2.24.0 Jan 27 12:52:58.476967 ignition[901]: Stage: disks Jan 27 12:52:58.477118 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 27 12:52:58.477128 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:52:58.478496 ignition[901]: disks: disks passed Jan 27 12:52:58.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:58.484362 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 27 12:52:58.478554 ignition[901]: Ignition finished successfully Jan 27 12:52:58.488219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 27 12:52:58.496478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 27 12:52:58.502977 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 12:52:58.508213 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 12:52:58.514489 systemd[1]: Reached target basic.target - Basic System. Jan 27 12:52:58.530919 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 27 12:52:58.574294 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 27 12:52:58.580357 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 27 12:52:58.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:58.591924 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 27 12:52:58.736661 kernel: EXT4-fs (vda9): mounted filesystem f82d5d40-607d-4567-b2c1-7e3e0fab898a r/w with ordered data mode. Quota mode: none. Jan 27 12:52:58.738035 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 27 12:52:58.746014 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 27 12:52:58.754790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 12:52:58.762079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 27 12:52:58.762489 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 27 12:52:58.762527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 27 12:52:58.762550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 12:52:58.790025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 27 12:52:58.793781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 27 12:52:58.811854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Jan 27 12:52:58.811875 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 12:52:58.811887 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 12:52:58.818551 kernel: BTRFS info (device vda6): turning on async discard Jan 27 12:52:58.818642 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 12:52:58.823157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 12:52:59.013502 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 27 12:52:59.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:59.018870 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 27 12:52:59.023933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 27 12:52:59.045413 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 27 12:52:59.050745 kernel: BTRFS info (device vda6): last unmount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 12:52:59.070885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 27 12:52:59.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:59.089100 ignition[1016]: INFO : Ignition 2.24.0 Jan 27 12:52:59.089100 ignition[1016]: INFO : Stage: mount Jan 27 12:52:59.093441 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 12:52:59.093441 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:52:59.093441 ignition[1016]: INFO : mount: mount passed Jan 27 12:52:59.093441 ignition[1016]: INFO : Ignition finished successfully Jan 27 12:52:59.106839 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 27 12:52:59.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:52:59.111628 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 27 12:52:59.144658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 12:52:59.172932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027) Jan 27 12:52:59.172975 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 12:52:59.172997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 12:52:59.183450 kernel: BTRFS info (device vda6): turning on async discard Jan 27 12:52:59.183482 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 12:52:59.185508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 27 12:52:59.224475 ignition[1044]: INFO : Ignition 2.24.0 Jan 27 12:52:59.224475 ignition[1044]: INFO : Stage: files Jan 27 12:52:59.231404 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 12:52:59.231404 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:52:59.231404 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Jan 27 12:52:59.231404 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 27 12:52:59.231404 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 27 12:52:59.254235 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 27 12:52:59.254235 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 27 12:52:59.254235 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 27 12:52:59.254235 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 27 12:52:59.254235 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 27 12:52:59.237315 unknown[1044]: wrote ssh authorized keys file for user: core Jan 27 12:52:59.304925 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 27 12:52:59.394486 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 27 12:52:59.394486 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 12:52:59.407978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 27 12:52:59.748929 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 27 12:53:00.048773 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 12:53:00.048773 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 27 12:53:00.060627 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 27 12:53:00.099218 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 27 12:53:00.103857 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 27 12:53:00.108959 ignition[1044]: INFO : files: files passed Jan 27 12:53:00.108959 ignition[1044]: INFO : Ignition finished successfully Jan 27 12:53:00.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.107111 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 27 12:53:00.111019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 27 12:53:00.152704 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 27 12:53:00.156669 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 27 12:53:00.156809 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 27 12:53:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.180448 initrd-setup-root-after-ignition[1075]: grep: /sysroot/oem/oem-release: No such file or directory Jan 27 12:53:00.188316 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 12:53:00.193150 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 27 12:53:00.197920 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 12:53:00.204036 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 12:53:00.204449 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 27 12:53:00.213159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 27 12:53:00.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.307266 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 27 12:53:00.307542 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 27 12:53:00.314364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 27 12:53:00.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.317981 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 27 12:53:00.327217 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 27 12:53:00.328307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 27 12:53:00.373785 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 12:53:00.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.376319 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 27 12:53:00.404970 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 27 12:53:00.405511 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 27 12:53:00.415427 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 12:53:00.419096 systemd[1]: Stopped target timers.target - Timer Units. Jan 27 12:53:00.427443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 27 12:53:00.427702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 12:53:00.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.437664 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 27 12:53:00.440993 systemd[1]: Stopped target basic.target - Basic System. Jan 27 12:53:00.449046 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 27 12:53:00.452797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 12:53:00.462368 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 27 12:53:00.462638 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 27 12:53:00.475037 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 27 12:53:00.478290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 12:53:00.484722 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 27 12:53:00.493959 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 27 12:53:00.497118 systemd[1]: Stopped target swap.target - Swaps. Jan 27 12:53:00.502228 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 27 12:53:00.502372 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 27 12:53:00.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.505135 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 27 12:53:00.510514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 12:53:00.522982 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 27 12:53:00.529200 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 12:53:00.530994 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 27 12:53:00.531101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 27 12:53:00.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.545055 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 27 12:53:00.545236 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 12:53:00.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.551724 systemd[1]: Stopped target paths.target - Path Units. Jan 27 12:53:00.557651 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 27 12:53:00.563318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 12:53:00.563747 systemd[1]: Stopped target slices.target - Slice Units. Jan 27 12:53:00.576209 systemd[1]: Stopped target sockets.target - Socket Units. Jan 27 12:53:00.579292 systemd[1]: iscsid.socket: Deactivated successfully. Jan 27 12:53:00.579424 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 12:53:00.581979 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 27 12:53:00.582053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 12:53:00.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.587020 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 27 12:53:00.587113 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 27 12:53:00.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.592497 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 27 12:53:00.592723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 12:53:00.598084 systemd[1]: ignition-files.service: Deactivated successfully. Jan 27 12:53:00.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.598208 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 27 12:53:00.608429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 27 12:53:00.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.610164 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 27 12:53:00.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.610299 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 12:53:00.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.616765 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 27 12:53:00.628727 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 27 12:53:00.628836 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 12:53:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:00.668705 ignition[1101]: INFO : Ignition 2.24.0 Jan 27 12:53:00.668705 ignition[1101]: INFO : Stage: umount Jan 27 12:53:00.668705 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 12:53:00.668705 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 27 12:53:00.668705 ignition[1101]: INFO : umount: umount passed Jan 27 12:53:00.668705 ignition[1101]: INFO : Ignition finished successfully Jan 27 12:53:00.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.631723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 27 12:53:00.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.631852 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 12:53:00.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.644178 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 27 12:53:00.644280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 12:53:00.660495 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 27 12:53:00.660716 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 27 12:53:00.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.671123 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 27 12:53:00.671262 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 27 12:53:00.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.677961 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 27 12:53:00.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.678392 systemd[1]: Stopped target network.target - Network. Jan 27 12:53:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.685626 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 27 12:53:00.688125 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 27 12:53:00.690842 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 27 12:53:00.690907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 27 12:53:00.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.699446 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 27 12:53:00.766000 audit: BPF prog-id=9 op=UNLOAD Jan 27 12:53:00.699528 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 27 12:53:00.711500 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 27 12:53:00.778000 audit: BPF prog-id=6 op=UNLOAD Jan 27 12:53:00.711623 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 27 12:53:00.725062 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 27 12:53:00.728109 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 27 12:53:00.728534 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 27 12:53:00.728725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 27 12:53:00.737241 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 27 12:53:00.737385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 27 12:53:00.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.743429 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 27 12:53:00.743636 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 27 12:53:00.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.755005 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 27 12:53:00.755161 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 27 12:53:00.767372 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 27 12:53:00.768077 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 27 12:53:00.768136 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 27 12:53:00.783108 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 27 12:53:00.786058 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 27 12:53:00.786117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 12:53:00.791861 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 27 12:53:00.791910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 27 12:53:00.805975 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 27 12:53:00.806024 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 27 12:53:00.809719 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 12:53:00.866953 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 27 12:53:00.870024 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 27 12:53:00.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.870501 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 27 12:53:00.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.870686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 27 12:53:00.883687 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 27 12:53:00.883762 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 27 12:53:00.889166 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 27 12:53:00.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.889209 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 12:53:00.892406 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 27 12:53:00.892476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 27 12:53:00.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.904744 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 27 12:53:00.904798 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 27 12:53:00.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.913922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 27 12:53:00.913976 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 12:53:00.930542 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 27 12:53:00.935207 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 27 12:53:00.935261 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 12:53:00.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.941797 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 27 12:53:00.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.941845 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 12:53:00.952895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 27 12:53:00.952946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:53:00.977017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 27 12:53:00.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:00.977156 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 27 12:53:00.977833 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 27 12:53:00.993265 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 27 12:53:01.029239 systemd[1]: Switching root. Jan 27 12:53:01.083708 systemd-journald[318]: Journal stopped Jan 27 12:53:02.616397 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). Jan 27 12:53:02.616464 kernel: SELinux: policy capability network_peer_controls=1 Jan 27 12:53:02.616478 kernel: SELinux: policy capability open_perms=1 Jan 27 12:53:02.616496 kernel: SELinux: policy capability extended_socket_class=1 Jan 27 12:53:02.616508 kernel: SELinux: policy capability always_check_network=0 Jan 27 12:53:02.616519 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 27 12:53:02.616538 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 27 12:53:02.616549 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 27 12:53:02.616560 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 27 12:53:02.616658 kernel: SELinux: policy capability userspace_initial_context=0 Jan 27 12:53:02.616681 systemd[1]: Successfully loaded SELinux policy in 71.311ms. Jan 27 12:53:02.616703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.882ms. Jan 27 12:53:02.616716 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 12:53:02.616728 systemd[1]: Detected virtualization kvm. Jan 27 12:53:02.616740 systemd[1]: Detected architecture x86-64. Jan 27 12:53:02.616751 systemd[1]: Detected first boot. Jan 27 12:53:02.616763 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 12:53:02.616779 zram_generator::config[1146]: No configuration found. Jan 27 12:53:02.616792 kernel: Guest personality initialized and is inactive Jan 27 12:53:02.616803 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 27 12:53:02.616814 kernel: Initialized host personality Jan 27 12:53:02.616825 kernel: NET: Registered PF_VSOCK protocol family Jan 27 12:53:02.616836 systemd[1]: Populated /etc with preset unit settings. 
Jan 27 12:53:02.616848 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 27 12:53:02.616861 kernel: audit: type=1334 audit(1769518381.994:88): prog-id=12 op=LOAD Jan 27 12:53:02.616872 kernel: audit: type=1334 audit(1769518381.994:89): prog-id=3 op=UNLOAD Jan 27 12:53:02.616882 kernel: audit: type=1334 audit(1769518381.994:90): prog-id=13 op=LOAD Jan 27 12:53:02.616894 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 27 12:53:02.616905 kernel: audit: type=1334 audit(1769518381.994:91): prog-id=14 op=LOAD Jan 27 12:53:02.616916 kernel: audit: type=1334 audit(1769518381.994:92): prog-id=4 op=UNLOAD Jan 27 12:53:02.616928 kernel: audit: type=1334 audit(1769518381.994:93): prog-id=5 op=UNLOAD Jan 27 12:53:02.616941 kernel: audit: type=1131 audit(1769518381.996:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.616952 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 27 12:53:02.616964 kernel: audit: type=1130 audit(1769518382.032:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.616975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 27 12:53:02.616987 kernel: audit: type=1131 audit(1769518382.032:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.617002 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 27 12:53:02.617016 kernel: audit: type=1334 audit(1769518382.032:97): prog-id=12 op=UNLOAD Jan 27 12:53:02.617028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 27 12:53:02.617040 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 27 12:53:02.617051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 27 12:53:02.617063 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 27 12:53:02.617076 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 27 12:53:02.617088 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 27 12:53:02.617099 systemd[1]: Created slice user.slice - User and Session Slice. Jan 27 12:53:02.617111 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 12:53:02.617123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 12:53:02.617135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 27 12:53:02.617147 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 27 12:53:02.617167 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 27 12:53:02.617181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 12:53:02.617192 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 27 12:53:02.617204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 12:53:02.617218 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 12:53:02.617229 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 27 12:53:02.617241 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 27 12:53:02.617252 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 27 12:53:02.617264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 27 12:53:02.617275 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 12:53:02.617288 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 12:53:02.617302 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 27 12:53:02.617313 systemd[1]: Reached target slices.target - Slice Units. Jan 27 12:53:02.617325 systemd[1]: Reached target swap.target - Swaps. Jan 27 12:53:02.617375 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 27 12:53:02.617389 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 27 12:53:02.617400 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 27 12:53:02.617412 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 12:53:02.617424 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 27 12:53:02.617438 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 12:53:02.617450 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 27 12:53:02.617461 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 27 12:53:02.617473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 12:53:02.617484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 12:53:02.617496 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 27 12:53:02.617508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 27 12:53:02.617521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 27 12:53:02.617533 systemd[1]: Mounting media.mount - External Media Directory... Jan 27 12:53:02.617544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 12:53:02.617556 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 27 12:53:02.617635 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 27 12:53:02.617660 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 27 12:53:02.617676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 27 12:53:02.617688 systemd[1]: Reached target machines.target - Containers. Jan 27 12:53:02.617701 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 27 12:53:02.617712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 12:53:02.617724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 27 12:53:02.617736 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 27 12:53:02.617748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 12:53:02.617762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 12:53:02.617774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 12:53:02.617786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 27 12:53:02.617797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 12:53:02.617809 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 27 12:53:02.617821 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 27 12:53:02.617832 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 27 12:53:02.617846 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 27 12:53:02.617857 systemd[1]: Stopped systemd-fsck-usr.service. Jan 27 12:53:02.617869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 12:53:02.617881 kernel: ACPI: bus type drm_connector registered Jan 27 12:53:02.617894 kernel: fuse: init (API version 7.41) Jan 27 12:53:02.617905 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 12:53:02.617918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 12:53:02.617930 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 12:53:02.617942 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 27 12:53:02.617974 systemd-journald[1232]: Collecting audit messages is enabled. Jan 27 12:53:02.617999 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 27 12:53:02.618011 systemd-journald[1232]: Journal started Jan 27 12:53:02.618030 systemd-journald[1232]: Runtime Journal (/run/log/journal/0b9bbdc1dfe244cabf927d45adfbbde3) is 6M, max 48M, 42M free. Jan 27 12:53:02.257000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 27 12:53:02.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:02.556000 audit: BPF prog-id=14 op=UNLOAD Jan 27 12:53:02.556000 audit: BPF prog-id=13 op=UNLOAD Jan 27 12:53:02.558000 audit: BPF prog-id=15 op=LOAD Jan 27 12:53:02.560000 audit: BPF prog-id=16 op=LOAD Jan 27 12:53:02.560000 audit: BPF prog-id=17 op=LOAD Jan 27 12:53:02.614000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 27 12:53:02.614000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdb9c24f00 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:02.614000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 27 12:53:01.969420 systemd[1]: Queued start job for default target multi-user.target. Jan 27 12:53:01.995405 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 27 12:53:01.996179 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 27 12:53:02.634651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 12:53:02.645637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 12:53:02.650616 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 12:53:02.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.657240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 27 12:53:02.661485 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 27 12:53:02.665001 systemd[1]: Mounted media.mount - External Media Directory. Jan 27 12:53:02.668129 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 27 12:53:02.671696 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 27 12:53:02.675211 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 27 12:53:02.678825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 27 12:53:02.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.682962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 12:53:02.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.687273 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 27 12:53:02.687961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 27 12:53:02.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:02.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.692042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 12:53:02.692554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 12:53:02.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.696894 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 12:53:02.697217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 12:53:02.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.701083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 12:53:02.701451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 12:53:02.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.705754 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 27 12:53:02.706090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 27 12:53:02.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.710184 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 12:53:02.710708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 12:53:02.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:02.714551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 12:53:02.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.718907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 12:53:02.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.724388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 27 12:53:02.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.729237 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 27 12:53:02.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.745107 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 12:53:02.749217 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 27 12:53:02.754245 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 27 12:53:02.758720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 27 12:53:02.761984 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 27 12:53:02.762035 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 12:53:02.765955 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 27 12:53:02.769996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 12:53:02.770146 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 12:53:02.776647 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 27 12:53:02.780999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 27 12:53:02.784861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 12:53:02.785983 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 27 12:53:02.789256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 12:53:02.792786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 12:53:02.797015 systemd-journald[1232]: Time spent on flushing to /var/log/journal/0b9bbdc1dfe244cabf927d45adfbbde3 is 20.337ms for 1188 entries. 
Jan 27 12:53:02.797015 systemd-journald[1232]: System Journal (/var/log/journal/0b9bbdc1dfe244cabf927d45adfbbde3) is 8M, max 163.5M, 155.5M free. Jan 27 12:53:02.827786 systemd-journald[1232]: Received client request to flush runtime journal. Jan 27 12:53:02.827822 kernel: loop1: detected capacity change from 0 to 50784 Jan 27 12:53:02.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.797835 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 27 12:53:02.802125 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 27 12:53:02.812969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 12:53:02.817409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 27 12:53:02.825454 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 27 12:53:02.831432 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 27 12:53:02.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.836538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 27 12:53:02.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.844038 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 27 12:53:02.849778 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 27 12:53:02.856739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 12:53:02.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.871107 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 27 12:53:02.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.875000 audit: BPF prog-id=18 op=LOAD Jan 27 12:53:02.875000 audit: BPF prog-id=19 op=LOAD Jan 27 12:53:02.875000 audit: BPF prog-id=20 op=LOAD Jan 27 12:53:02.877273 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 27 12:53:02.882704 kernel: loop2: detected capacity change from 0 to 219144 Jan 27 12:53:02.885000 audit: BPF prog-id=21 op=LOAD Jan 27 12:53:02.888730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 12:53:02.896864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 12:53:02.903000 audit: BPF prog-id=22 op=LOAD Jan 27 12:53:02.903000 audit: BPF prog-id=23 op=LOAD Jan 27 12:53:02.903000 audit: BPF prog-id=24 op=LOAD Jan 27 12:53:02.905319 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Jan 27 12:53:02.919000 audit: BPF prog-id=25 op=LOAD Jan 27 12:53:02.919000 audit: BPF prog-id=26 op=LOAD Jan 27 12:53:02.919000 audit: BPF prog-id=27 op=LOAD Jan 27 12:53:02.921732 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 27 12:53:02.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.925854 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 27 12:53:02.937708 kernel: loop3: detected capacity change from 0 to 111560 Jan 27 12:53:02.944480 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 27 12:53:02.944513 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 27 12:53:02.950926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 12:53:02.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.975812 systemd-nsresourced[1287]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 27 12:53:02.976644 kernel: loop4: detected capacity change from 0 to 50784 Jan 27 12:53:02.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:02.977807 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 27 12:53:02.982674 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 27 12:53:02.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.000414 kernel: loop5: detected capacity change from 0 to 219144 Jan 27 12:53:03.003242 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 27 12:53:03.017626 kernel: loop6: detected capacity change from 0 to 111560 Jan 27 12:53:03.028458 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 27 12:53:03.032873 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 27 12:53:03.038562 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Jan 27 12:53:03.038732 systemd[1]: Reloading... Jan 27 12:53:03.060472 systemd-oomd[1282]: No swap; memory pressure usage will be degraded Jan 27 12:53:03.069984 systemd-resolved[1284]: Positive Trust Anchors: Jan 27 12:53:03.070022 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 12:53:03.070027 systemd-resolved[1284]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 12:53:03.070053 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 12:53:03.074647 systemd-resolved[1284]: Defaulting to hostname 'linux'. Jan 27 12:53:03.102653 zram_generator::config[1335]: No configuration found. Jan 27 12:53:03.316729 systemd[1]: Reloading finished in 277 ms. Jan 27 12:53:03.348129 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 27 12:53:03.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.352644 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 12:53:03.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.356550 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 27 12:53:03.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.360686 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 27 12:53:03.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.369426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 12:53:03.396520 systemd[1]: Starting ensure-sysext.service... Jan 27 12:53:03.400284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 12:53:03.403000 audit: BPF prog-id=8 op=UNLOAD Jan 27 12:53:03.403000 audit: BPF prog-id=7 op=UNLOAD Jan 27 12:53:03.404000 audit: BPF prog-id=28 op=LOAD Jan 27 12:53:03.404000 audit: BPF prog-id=29 op=LOAD Jan 27 12:53:03.406528 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 12:53:03.411000 audit: BPF prog-id=30 op=LOAD Jan 27 12:53:03.411000 audit: BPF prog-id=15 op=UNLOAD Jan 27 12:53:03.411000 audit: BPF prog-id=31 op=LOAD Jan 27 12:53:03.411000 audit: BPF prog-id=32 op=LOAD Jan 27 12:53:03.421071 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 27 12:53:03.421127 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 27 12:53:03.421464 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 27 12:53:03.422842 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 27 12:53:03.422937 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 27 12:53:03.423000 audit: BPF prog-id=16 op=UNLOAD Jan 27 12:53:03.423000 audit: BPF prog-id=17 op=UNLOAD Jan 27 12:53:03.424000 audit: BPF prog-id=33 op=LOAD Jan 27 12:53:03.424000 audit: BPF prog-id=25 op=UNLOAD Jan 27 12:53:03.424000 audit: BPF prog-id=34 op=LOAD Jan 27 12:53:03.424000 audit: BPF prog-id=35 op=LOAD Jan 27 12:53:03.424000 audit: BPF prog-id=26 op=UNLOAD Jan 27 12:53:03.424000 audit: BPF prog-id=27 op=UNLOAD Jan 27 12:53:03.425000 audit: BPF prog-id=36 op=LOAD Jan 27 12:53:03.425000 audit: BPF prog-id=22 op=UNLOAD Jan 27 12:53:03.425000 audit: BPF prog-id=37 op=LOAD Jan 27 12:53:03.425000 audit: BPF prog-id=38 op=LOAD Jan 27 12:53:03.425000 audit: BPF prog-id=23 op=UNLOAD Jan 27 12:53:03.425000 audit: BPF prog-id=24 op=UNLOAD Jan 27 12:53:03.426000 audit: BPF prog-id=39 op=LOAD Jan 27 12:53:03.426000 audit: BPF prog-id=21 op=UNLOAD Jan 27 12:53:03.429830 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 12:53:03.429874 systemd-tmpfiles[1376]: Skipping /boot Jan 27 12:53:03.429000 audit: BPF prog-id=40 op=LOAD Jan 27 12:53:03.429000 audit: BPF prog-id=18 op=UNLOAD Jan 27 12:53:03.429000 audit: BPF prog-id=41 op=LOAD Jan 27 12:53:03.429000 audit: BPF prog-id=42 op=LOAD Jan 27 12:53:03.429000 audit: BPF prog-id=19 op=UNLOAD Jan 27 12:53:03.429000 audit: BPF prog-id=20 op=UNLOAD Jan 27 12:53:03.436449 systemd[1]: Reload requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)... Jan 27 12:53:03.436489 systemd[1]: Reloading... Jan 27 12:53:03.447316 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 12:53:03.447392 systemd-tmpfiles[1376]: Skipping /boot Jan 27 12:53:03.450320 systemd-udevd[1377]: Using default interface naming scheme 'v257'. Jan 27 12:53:03.504648 zram_generator::config[1422]: No configuration found. Jan 27 12:53:03.596686 kernel: mousedev: PS/2 mouse device common for all mice Jan 27 12:53:03.617685 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 27 12:53:03.625136 kernel: ACPI: button: Power Button [PWRF] Jan 27 12:53:03.646859 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 27 12:53:03.647297 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 27 12:53:03.647644 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 27 12:53:03.736548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 12:53:03.743072 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 27 12:53:03.743250 systemd[1]: Reloading finished in 306 ms. Jan 27 12:53:03.752555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 12:53:03.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:03.762000 audit: BPF prog-id=43 op=LOAD Jan 27 12:53:03.762000 audit: BPF prog-id=30 op=UNLOAD Jan 27 12:53:03.762000 audit: BPF prog-id=44 op=LOAD Jan 27 12:53:03.762000 audit: BPF prog-id=45 op=LOAD Jan 27 12:53:03.762000 audit: BPF prog-id=31 op=UNLOAD Jan 27 12:53:03.762000 audit: BPF prog-id=32 op=UNLOAD Jan 27 12:53:03.765000 audit: BPF prog-id=46 op=LOAD Jan 27 12:53:03.781000 audit: BPF prog-id=39 op=UNLOAD Jan 27 12:53:03.781000 audit: BPF prog-id=47 op=LOAD Jan 27 12:53:03.781000 audit: BPF prog-id=33 op=UNLOAD Jan 27 12:53:03.782000 audit: BPF prog-id=48 op=LOAD Jan 27 12:53:03.782000 audit: BPF prog-id=49 op=LOAD Jan 27 12:53:03.782000 audit: BPF prog-id=34 op=UNLOAD Jan 27 12:53:03.782000 audit: BPF prog-id=35 op=UNLOAD Jan 27 12:53:03.783000 audit: BPF prog-id=50 op=LOAD Jan 27 12:53:03.783000 audit: BPF prog-id=40 op=UNLOAD Jan 27 12:53:03.783000 audit: BPF prog-id=51 op=LOAD Jan 27 12:53:03.783000 audit: BPF prog-id=52 op=LOAD Jan 27 12:53:03.783000 audit: BPF prog-id=41 op=UNLOAD Jan 27 12:53:03.783000 audit: BPF prog-id=42 op=UNLOAD Jan 27 12:53:03.784000 audit: BPF prog-id=53 op=LOAD Jan 27 12:53:03.784000 audit: BPF prog-id=36 op=UNLOAD Jan 27 12:53:03.784000 audit: BPF prog-id=54 op=LOAD Jan 27 12:53:03.784000 audit: BPF prog-id=55 op=LOAD Jan 27 12:53:03.784000 audit: BPF prog-id=37 op=UNLOAD Jan 27 12:53:03.784000 audit: BPF prog-id=38 op=UNLOAD Jan 27 12:53:03.785000 audit: BPF prog-id=56 op=LOAD Jan 27 12:53:03.785000 audit: BPF prog-id=57 op=LOAD Jan 27 12:53:03.785000 audit: BPF prog-id=28 op=UNLOAD Jan 27 12:53:03.785000 audit: BPF prog-id=29 op=UNLOAD Jan 27 12:53:03.791762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 12:53:03.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.869371 kernel: kvm_amd: TSC scaling supported Jan 27 12:53:03.869441 kernel: kvm_amd: Nested Virtualization enabled Jan 27 12:53:03.869486 kernel: kvm_amd: Nested Paging enabled Jan 27 12:53:03.872449 systemd[1]: Finished ensure-sysext.service. Jan 27 12:53:03.872714 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 27 12:53:03.872747 kernel: kvm_amd: PMU virtualization is disabled Jan 27 12:53:03.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:03.923946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 12:53:03.925451 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 12:53:03.929461 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 27 12:53:03.933470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 12:53:03.934713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 12:53:03.938912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 12:53:03.954190 kernel: EDAC MC: Ver: 3.0.0 Jan 27 12:53:03.956179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 27 12:53:03.961733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 12:53:03.965631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 12:53:03.965772 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 12:53:03.967454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 27 12:53:03.972764 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 27 12:53:03.977273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 12:53:03.981862 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 27 12:53:03.987000 audit: BPF prog-id=58 op=LOAD Jan 27 12:53:03.990130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 12:53:03.990000 audit: BPF prog-id=59 op=LOAD Jan 27 12:53:03.994085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 27 12:53:03.997809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 27 12:53:04.009123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 12:53:04.009240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 12:53:04.010822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 12:53:04.011088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 12:53:04.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.013507 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 12:53:04.014009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 12:53:04.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.014651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 12:53:04.016046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 12:53:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:04.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.017647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 12:53:04.017971 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 12:53:04.021000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.029044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 27 12:53:04.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.039507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 12:53:04.039789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 12:53:04.041907 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 27 12:53:04.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.061504 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 27 12:53:04.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:04.076000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 27 12:53:04.076000 audit[1538]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe8851f410 a2=420 a3=0 items=0 ppid=1491 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:04.076000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 12:53:04.076907 augenrules[1538]: No rules Jan 27 12:53:04.077982 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 12:53:04.079013 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 12:53:04.100072 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 27 12:53:04.105458 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 27 12:53:04.126463 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 27 12:53:04.126927 systemd[1]: Reached target time-set.target - System Time Set. Jan 27 12:53:04.136326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 12:53:04.139258 systemd-networkd[1508]: lo: Link UP Jan 27 12:53:04.139264 systemd-networkd[1508]: lo: Gained carrier Jan 27 12:53:04.142120 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 12:53:04.142150 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 12:53:04.142396 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 12:53:04.143238 systemd-networkd[1508]: eth0: Link UP Jan 27 12:53:04.143958 systemd-networkd[1508]: eth0: Gained carrier Jan 27 12:53:04.143975 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 12:53:04.146187 systemd[1]: Reached target network.target - Network. Jan 27 12:53:04.151977 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 27 12:53:04.158282 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 27 12:53:04.174764 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 27 12:53:04.175854 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 27 12:53:05.230479 systemd-timesyncd[1509]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 27 12:53:05.230531 systemd-timesyncd[1509]: Initial clock synchronization to Tue 2026-01-27 12:53:05.230359 UTC. Jan 27 12:53:05.231081 systemd-resolved[1284]: Clock change detected. Flushing caches. Jan 27 12:53:05.244350 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 27 12:53:05.525875 ldconfig[1498]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 27 12:53:05.532188 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 27 12:53:05.538061 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 27 12:53:05.568572 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 27 12:53:05.572344 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 12:53:05.575791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 27 12:53:05.579614 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 27 12:53:05.583471 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 27 12:53:05.587177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 27 12:53:05.590567 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 27 12:53:05.594454 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Jan 27 12:53:05.598359 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 27 12:53:05.601702 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 27 12:53:05.605445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 27 12:53:05.605520 systemd[1]: Reached target paths.target - Path Units. Jan 27 12:53:05.608251 systemd[1]: Reached target timers.target - Timer Units. Jan 27 12:53:05.611882 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 27 12:53:05.617004 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 27 12:53:05.622980 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 27 12:53:05.627121 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 27 12:53:05.631040 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 27 12:53:05.636776 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 27 12:53:05.640323 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 27 12:53:05.644737 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 27 12:53:05.648986 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 12:53:05.651995 systemd[1]: Reached target basic.target - Basic System. Jan 27 12:53:05.654852 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 27 12:53:05.654973 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 27 12:53:05.656289 systemd[1]: Starting containerd.service - containerd container runtime... Jan 27 12:53:05.661174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 27 12:53:05.676128 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 27 12:53:05.681167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 27 12:53:05.685745 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 27 12:53:05.688761 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 27 12:53:05.700405 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 27 12:53:05.706298 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 27 12:53:05.708980 jq[1560]: false Jan 27 12:53:05.711335 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 27 12:53:05.716290 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 27 12:53:05.716181 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 27 12:53:05.715525 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 27 12:53:05.721412 extend-filesystems[1561]: Found /dev/vda6 Jan 27 12:53:05.725539 extend-filesystems[1561]: Found /dev/vda9 Jan 27 12:53:05.726351 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 27 12:53:05.733986 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 27 12:53:05.733986 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 12:53:05.733986 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 27 12:53:05.734083 extend-filesystems[1561]: Checking size of /dev/vda9 Jan 27 12:53:05.732186 oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 27 12:53:05.736621 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 27 12:53:05.732204 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 12:53:05.732247 oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 27 12:53:05.741287 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 27 12:53:05.742057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 27 12:53:05.743278 systemd[1]: Starting update-engine.service - Update Engine... Jan 27 12:53:05.748121 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 27 12:53:05.754830 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 27 12:53:05.754830 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 12:53:05.752180 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 27 12:53:05.755632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 27 12:53:05.752196 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 12:53:05.760443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 27 12:53:05.761331 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 27 12:53:05.764192 extend-filesystems[1561]: Resized partition /dev/vda9 Jan 27 12:53:05.761835 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 27 12:53:05.769423 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 27 12:53:05.774469 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Jan 27 12:53:05.787292 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 27 12:53:05.774828 systemd[1]: motdgen.service: Deactivated successfully. Jan 27 12:53:05.787521 jq[1582]: true Jan 27 12:53:05.775163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 27 12:53:05.788960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 27 12:53:05.789252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 27 12:53:05.801549 update_engine[1579]: I20260127 12:53:05.801380 1579 main.cc:92] Flatcar Update Engine starting Jan 27 12:53:05.835987 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 27 12:53:05.845960 tar[1595]: linux-amd64/LICENSE Jan 27 12:53:05.847513 jq[1596]: true Jan 27 12:53:05.857885 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 27 12:53:05.857885 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 27 12:53:05.857885 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 27 12:53:05.857630 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 27 12:53:05.875481 tar[1595]: linux-amd64/helm Jan 27 12:53:05.875504 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Jan 27 12:53:05.858062 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 27 12:53:05.879382 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button) Jan 27 12:53:05.879404 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 27 12:53:05.880754 systemd-logind[1575]: New seat seat0. Jan 27 12:53:05.893047 systemd[1]: Started systemd-logind.service - User Login Management. Jan 27 12:53:05.912745 dbus-daemon[1558]: [system] SELinux support is enabled Jan 27 12:53:05.913106 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 27 12:53:05.922010 update_engine[1579]: I20260127 12:53:05.920830 1579 update_check_scheduler.cc:74] Next update check in 4m20s Jan 27 12:53:05.925468 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 27 12:53:05.925498 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 27 12:53:05.930821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 27 12:53:05.930838 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 27 12:53:05.937173 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 27 12:53:05.938337 systemd[1]: Started update-engine.service - Update Engine. Jan 27 12:53:05.946271 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 27 12:53:05.957577 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 27 12:53:05.960705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 27 12:53:05.968081 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 27 12:53:06.016324 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 27 12:53:06.062458 containerd[1598]: time="2026-01-27T12:53:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 27 12:53:06.064368 containerd[1598]: time="2026-01-27T12:53:06.064304419Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078118023Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.586µs" Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078190749Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078257774Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078288091Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078551152Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078566411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078624749Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078634607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078867432Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.078954 containerd[1598]: time="2026-01-27T12:53:06.078880346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079166 containerd[1598]: time="2026-01-27T12:53:06.078889363Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079207 containerd[1598]: time="2026-01-27T12:53:06.079195936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079413 containerd[1598]: time="2026-01-27T12:53:06.079396561Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079472 containerd[1598]: time="2026-01-27T12:53:06.079459358Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079593 containerd[1598]: 
time="2026-01-27T12:53:06.079578160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.079967 containerd[1598]: time="2026-01-27T12:53:06.079872349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.080048 containerd[1598]: time="2026-01-27T12:53:06.080031897Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 12:53:06.080102 containerd[1598]: time="2026-01-27T12:53:06.080089976Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 27 12:53:06.080161 containerd[1598]: time="2026-01-27T12:53:06.080150168Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 27 12:53:06.080347 containerd[1598]: time="2026-01-27T12:53:06.080332137Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 27 12:53:06.080449 containerd[1598]: time="2026-01-27T12:53:06.080435721Z" level=info msg="metadata content store policy set" policy=shared Jan 27 12:53:06.088531 containerd[1598]: time="2026-01-27T12:53:06.088482658Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 27 12:53:06.088604 containerd[1598]: time="2026-01-27T12:53:06.088566033Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 12:53:06.088738 containerd[1598]: time="2026-01-27T12:53:06.088659378Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 12:53:06.088738 containerd[1598]: time="2026-01-27T12:53:06.088733766Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 27 12:53:06.088780 containerd[1598]: time="2026-01-27T12:53:06.088746410Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 27 12:53:06.088780 containerd[1598]: time="2026-01-27T12:53:06.088756028Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 27 12:53:06.088780 containerd[1598]: time="2026-01-27T12:53:06.088765225Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088785734Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088806833Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088816771Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088824646Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088832881Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088842189Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 27 12:53:06.088860 containerd[1598]: time="2026-01-27T12:53:06.088851466Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 27 12:53:06.089074 containerd[1598]: time="2026-01-27T12:53:06.089037433Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 27 12:53:06.089074 containerd[1598]: time="2026-01-27T12:53:06.089064183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 27 12:53:06.089108 containerd[1598]: time="2026-01-27T12:53:06.089084712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 27 12:53:06.089108 containerd[1598]: time="2026-01-27T12:53:06.089096624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 27 12:53:06.089148 containerd[1598]: time="2026-01-27T12:53:06.089108226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 27 12:53:06.089148 containerd[1598]: time="2026-01-27T12:53:06.089119587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 27 12:53:06.089179 containerd[1598]: time="2026-01-27T12:53:06.089145415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 27 12:53:06.089179 containerd[1598]: time="2026-01-27T12:53:06.089160473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 27 12:53:06.089179 containerd[1598]: time="2026-01-27T12:53:06.089174179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 27 12:53:06.089227 containerd[1598]: time="2026-01-27T12:53:06.089187774Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 27 12:53:06.089227 containerd[1598]: time="2026-01-27T12:53:06.089199356Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 27 12:53:06.089227 containerd[1598]: time="2026-01-27T12:53:06.089222419Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 27 12:53:06.089420 containerd[1598]: time="2026-01-27T12:53:06.089268084Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 27 12:53:06.089420 containerd[1598]: time="2026-01-27T12:53:06.089287270Z" level=info msg="Start snapshots syncer" Jan 27 12:53:06.089420 containerd[1598]: time="2026-01-27T12:53:06.089316314Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 27 12:53:06.089864 containerd[1598]: time="2026-01-27T12:53:06.089734896Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 27 12:53:06.090107 containerd[1598]: time="2026-01-27T12:53:06.089874497Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 27 12:53:06.090107 containerd[1598]: time="2026-01-27T12:53:06.090038844Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 27 12:53:06.090149 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090165390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090188983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090200716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090212959Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090225452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090238306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090263443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090275265Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 27 12:53:06.090341 containerd[1598]: time="2026-01-27T12:53:06.090286907Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090357469Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090372767Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090383056Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090394458Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090404476Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 27 12:53:06.090482 containerd[1598]: time="2026-01-27T12:53:06.090468896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090484506Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090498993Z" level=info msg="runtime interface created" Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090507448Z" level=info msg="created NRI interface" Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090519872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090532876Z" level=info msg="Connect containerd service" Jan 27 12:53:06.090575 containerd[1598]: time="2026-01-27T12:53:06.090552783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 27 12:53:06.091498 containerd[1598]: time="2026-01-27T12:53:06.091439379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 12:53:06.120850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 27 12:53:06.130223 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 27 12:53:06.161255 systemd[1]: issuegen.service: Deactivated successfully. Jan 27 12:53:06.161601 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 27 12:53:06.170295 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 27 12:53:06.195022 containerd[1598]: time="2026-01-27T12:53:06.194960719Z" level=info msg="Start subscribing containerd event" Jan 27 12:53:06.195099 containerd[1598]: time="2026-01-27T12:53:06.195023997Z" level=info msg="Start recovering state" Jan 27 12:53:06.195131 containerd[1598]: time="2026-01-27T12:53:06.195104638Z" level=info msg="Start event monitor" Jan 27 12:53:06.195131 containerd[1598]: time="2026-01-27T12:53:06.195116380Z" level=info msg="Start cni network conf syncer for default" Jan 27 12:53:06.195131 containerd[1598]: time="2026-01-27T12:53:06.195124575Z" level=info msg="Start streaming server" Jan 27 12:53:06.195131 containerd[1598]: time="2026-01-27T12:53:06.195131468Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 27 12:53:06.195260 containerd[1598]: time="2026-01-27T12:53:06.195138922Z" level=info msg="runtime interface starting up..." Jan 27 12:53:06.195260 containerd[1598]: time="2026-01-27T12:53:06.195144592Z" level=info msg="starting plugins..." Jan 27 12:53:06.195260 containerd[1598]: time="2026-01-27T12:53:06.195158759Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 27 12:53:06.195780 containerd[1598]: time="2026-01-27T12:53:06.195756428Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 27 12:53:06.196029 containerd[1598]: time="2026-01-27T12:53:06.196010121Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 27 12:53:06.196343 systemd[1]: Started containerd.service - containerd container runtime. Jan 27 12:53:06.196600 containerd[1598]: time="2026-01-27T12:53:06.196498053Z" level=info msg="containerd successfully booted in 0.134835s" Jan 27 12:53:06.203628 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 27 12:53:06.210240 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 27 12:53:06.215468 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 27 12:53:06.219299 systemd[1]: Reached target getty.target - Login Prompts. Jan 27 12:53:06.221410 tar[1595]: linux-amd64/README.md Jan 27 12:53:06.250202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 27 12:53:06.815250 systemd-networkd[1508]: eth0: Gained IPv6LL Jan 27 12:53:06.818961 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 27 12:53:06.823538 systemd[1]: Reached target network-online.target - Network is Online. Jan 27 12:53:06.829025 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 27 12:53:06.834364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 12:53:06.848336 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 27 12:53:06.881488 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 27 12:53:06.885437 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 27 12:53:06.886019 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 27 12:53:06.892207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 27 12:53:07.706829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:07.711199 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 27 12:53:07.715287 systemd[1]: Startup finished in 3.637s (kernel) + 6.356s (initrd) + 5.437s (userspace) = 15.431s. 
Jan 27 12:53:07.737484 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 12:53:08.201817 kubelet[1699]: E0127 12:53:08.201652 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 12:53:08.204543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 12:53:08.204811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 12:53:08.205396 systemd[1]: kubelet.service: Consumed 943ms CPU time, 257.6M memory peak. Jan 27 12:53:08.509120 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 27 12:53:08.510463 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928). Jan 27 12:53:08.614261 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:08.617252 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:08.625367 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 27 12:53:08.626614 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 27 12:53:08.632654 systemd-logind[1575]: New session 1 of user core. Jan 27 12:53:08.654652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 27 12:53:08.659154 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 27 12:53:08.689818 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:08.693560 systemd-logind[1575]: New session 2 of user core. Jan 27 12:53:08.844177 systemd[1719]: Queued start job for default target default.target. Jan 27 12:53:08.865541 systemd[1719]: Created slice app.slice - User Application Slice. Jan 27 12:53:08.865622 systemd[1719]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 27 12:53:08.865640 systemd[1719]: Reached target paths.target - Paths. Jan 27 12:53:08.865793 systemd[1719]: Reached target timers.target - Timers. Jan 27 12:53:08.867858 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 27 12:53:08.869144 systemd[1719]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 27 12:53:08.882225 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 27 12:53:08.882316 systemd[1719]: Reached target sockets.target - Sockets. Jan 27 12:53:08.885014 systemd[1719]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 27 12:53:08.885150 systemd[1719]: Reached target basic.target - Basic System. Jan 27 12:53:08.885256 systemd[1719]: Reached target default.target - Main User Target. Jan 27 12:53:08.885291 systemd[1719]: Startup finished in 183ms. Jan 27 12:53:08.885511 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 27 12:53:08.904296 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 27 12:53:08.926779 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:53940.service - OpenSSH per-connection server daemon (10.0.0.1:53940). 
Jan 27 12:53:09.015303 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 53940 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.017410 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.024851 systemd-logind[1575]: New session 3 of user core. Jan 27 12:53:09.035186 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 27 12:53:09.054882 sshd[1737]: Connection closed by 10.0.0.1 port 53940 Jan 27 12:53:09.055228 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:09.067523 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:53940.service: Deactivated successfully. Jan 27 12:53:09.107274 systemd[1]: session-3.scope: Deactivated successfully. Jan 27 12:53:09.109132 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Jan 27 12:53:09.113383 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:53952.service - OpenSSH per-connection server daemon (10.0.0.1:53952). Jan 27 12:53:09.114329 systemd-logind[1575]: Removed session 3. Jan 27 12:53:09.193664 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 53952 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.196293 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.203261 systemd-logind[1575]: New session 4 of user core. Jan 27 12:53:09.214243 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 27 12:53:09.226562 sshd[1748]: Connection closed by 10.0.0.1 port 53952 Jan 27 12:53:09.227136 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:09.245820 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:53952.service: Deactivated successfully. Jan 27 12:53:09.248332 systemd[1]: session-4.scope: Deactivated successfully. Jan 27 12:53:09.249888 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Jan 27 12:53:09.253430 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Jan 27 12:53:09.254256 systemd-logind[1575]: Removed session 4. Jan 27 12:53:09.330836 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.333068 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.340458 systemd-logind[1575]: New session 5 of user core. Jan 27 12:53:09.354204 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 27 12:53:09.372529 sshd[1758]: Connection closed by 10.0.0.1 port 53954 Jan 27 12:53:09.374202 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:09.385280 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:53954.service: Deactivated successfully. Jan 27 12:53:09.387881 systemd[1]: session-5.scope: Deactivated successfully. Jan 27 12:53:09.389308 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Jan 27 12:53:09.392421 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:53962.service - OpenSSH per-connection server daemon (10.0.0.1:53962). Jan 27 12:53:09.393511 systemd-logind[1575]: Removed session 5. 
Jan 27 12:53:09.459254 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 53962 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.461282 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.468342 systemd-logind[1575]: New session 6 of user core. Jan 27 12:53:09.482115 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 27 12:53:09.509438 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 27 12:53:09.510098 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 12:53:09.524573 sudo[1769]: pam_unix(sudo:session): session closed for user root Jan 27 12:53:09.526310 sshd[1768]: Connection closed by 10.0.0.1 port 53962 Jan 27 12:53:09.526975 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:09.543068 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:53962.service: Deactivated successfully. Jan 27 12:53:09.545208 systemd[1]: session-6.scope: Deactivated successfully. Jan 27 12:53:09.546421 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Jan 27 12:53:09.549534 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964). Jan 27 12:53:09.550572 systemd-logind[1575]: Removed session 6. Jan 27 12:53:09.631091 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.633182 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.640411 systemd-logind[1575]: New session 7 of user core. Jan 27 12:53:09.651071 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 27 12:53:09.671773 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 27 12:53:09.672245 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 12:53:09.678612 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 27 12:53:09.688091 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 27 12:53:09.688491 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 12:53:09.698335 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 12:53:09.760000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 12:53:09.762148 augenrules[1806]: No rules Jan 27 12:53:09.764219 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 27 12:53:09.764263 kernel: audit: type=1305 audit(1769518389.760:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 12:53:09.765362 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 12:53:09.765818 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 27 12:53:09.767243 sudo[1781]: pam_unix(sudo:session): session closed for user root Jan 27 12:53:09.769231 sshd[1780]: Connection closed by 10.0.0.1 port 53964 Jan 27 12:53:09.769775 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:09.760000 audit[1806]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc97c2b4f0 a2=420 a3=0 items=0 ppid=1787 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:09.783710 kernel: audit: type=1300 audit(1769518389.760:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc97c2b4f0 a2=420 a3=0 items=0 ppid=1787 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:09.760000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 12:53:09.790292 kernel: audit: type=1327 audit(1769518389.760:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 12:53:09.790387 kernel: audit: type=1130 audit(1769518389.765:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.810762 kernel: audit: type=1131 audit(1769518389.765:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.810794 kernel: audit: type=1106 audit(1769518389.766:230): pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.766000 audit[1781]: USER_END pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.819614 kernel: audit: type=1104 audit(1769518389.766:231): pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.766000 audit[1781]: CRED_DISP pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:09.827379 kernel: audit: type=1106 audit(1769518389.770:232): pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.770000 audit[1776]: USER_END pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.839042 kernel: audit: type=1104 audit(1769518389.770:233): pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.770000 audit[1776]: CRED_DISP pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.857046 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:53964.service: Deactivated successfully. Jan 27 12:53:09.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.130:22-10.0.0.1:53964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.859158 systemd[1]: session-7.scope: Deactivated successfully. Jan 27 12:53:09.860395 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Jan 27 12:53:09.863320 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). Jan 27 12:53:09.864114 systemd-logind[1575]: Removed session 7. Jan 27 12:53:09.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:53980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.867976 kernel: audit: type=1131 audit(1769518389.856:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.130:22-10.0.0.1:53964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:09.917000 audit[1815]: USER_ACCT pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.918724 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:53:09.918000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.918000 audit[1815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf2373c30 a2=3 a3=0 items=0 ppid=1 pid=1815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:09.918000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:53:09.920522 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:53:09.927358 systemd-logind[1575]: New session 8 of user core. Jan 27 12:53:09.938156 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 27 12:53:09.941000 audit[1815]: USER_START pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.943000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:09.958000 audit[1820]: USER_ACCT pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.959828 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 27 12:53:09.959000 audit[1820]: CRED_REFR pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:09.960319 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 12:53:09.959000 audit[1820]: USER_START pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:10.349751 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 27 12:53:10.371266 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 27 12:53:10.649957 dockerd[1841]: time="2026-01-27T12:53:10.649749850Z" level=info msg="Starting up" Jan 27 12:53:10.650934 dockerd[1841]: time="2026-01-27T12:53:10.650840321Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 27 12:53:10.667445 dockerd[1841]: time="2026-01-27T12:53:10.667354700Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 27 12:53:10.876874 dockerd[1841]: time="2026-01-27T12:53:10.876727614Z" level=info msg="Loading containers: start." Jan 27 12:53:10.889977 kernel: Initializing XFRM netlink socket Jan 27 12:53:10.977000 audit[1895]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.977000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffa77e63b0 a2=0 a3=0 items=0 ppid=1841 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 12:53:10.981000 audit[1897]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.981000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7a86e470 a2=0 a3=0 items=0 ppid=1841 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 12:53:10.985000 audit[1899]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.985000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2baf590 a2=0 a3=0 items=0 ppid=1841 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 12:53:10.990000 audit[1901]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.990000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5ee40be0 a2=0 a3=0 items=0 ppid=1841 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 12:53:10.993000 audit[1903]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.993000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda2b24040 a2=0 a3=0 items=0 ppid=1841 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 12:53:10.997000 audit[1905]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:10.997000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe293a4ac0 a2=0 a3=0 items=0 ppid=1841 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:10.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 12:53:11.001000 audit[1907]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.001000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbbfd55c0 a2=0 a3=0 items=0 ppid=1841 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.001000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 12:53:11.005000 audit[1909]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.005000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc543dc8f0 a2=0 a3=0 items=0 ppid=1841 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.005000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 12:53:11.057000 audit[1912]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.057000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffb5d89680 a2=0 a3=0 items=0 ppid=1841 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 27 12:53:11.061000 audit[1914]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1914 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.061000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee2b0b490 a2=0 a3=0 items=0 ppid=1841 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 12:53:11.065000 audit[1916]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.065000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdf3c366d0 a2=0 a3=0 items=0 ppid=1841 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 12:53:11.069000 audit[1918]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.069000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc941210b0 a2=0 a3=0 items=0 ppid=1841 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.069000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 12:53:11.074000 audit[1920]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.074000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe8b3a4790 a2=0 a3=0 items=0 ppid=1841 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 12:53:11.141000 audit[1950]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.141000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef5003340 a2=0 a3=0 items=0 ppid=1841 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 12:53:11.145000 audit[1952]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.145000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc89f44140 a2=0 a3=0 items=0 ppid=1841 pid=1952 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 12:53:11.148000 audit[1954]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.148000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd31f4700 a2=0 a3=0 items=0 ppid=1841 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 12:53:11.152000 audit[1956]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.152000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d021fa0 a2=0 a3=0 items=0 ppid=1841 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 12:53:11.156000 audit[1958]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.156000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde24cf200 a2=0 a3=0 items=0 ppid=1841 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 12:53:11.159000 audit[1960]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.159000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff28220330 a2=0 a3=0 items=0 ppid=1841 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 12:53:11.163000 audit[1962]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.163000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc8e406200 a2=0 a3=0 items=0 ppid=1841 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.163000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 12:53:11.167000 audit[1964]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.167000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe798ddbb0 a2=0 a3=0 items=0 ppid=1841 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 12:53:11.172000 audit[1966]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.172000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffccf5650e0 a2=0 a3=0 items=0 ppid=1841 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 27 12:53:11.175000 audit[1968]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.175000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe6d22180 a2=0 a3=0 items=0 ppid=1841 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 12:53:11.179000 audit[1970]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.179000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff55f34460 a2=0 a3=0 items=0 ppid=1841 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 12:53:11.183000 audit[1972]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.183000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffc996fcd0 a2=0 a3=0 items=0 ppid=1841 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.183000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 12:53:11.187000 audit[1974]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.187000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc51389ec0 a2=0 a3=0 items=0 ppid=1841 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 12:53:11.197000 audit[1979]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.197000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffad710540 a2=0 a3=0 items=0 ppid=1841 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 12:53:11.201000 audit[1981]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.201000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffec191b650 a2=0 a3=0 items=0 ppid=1841 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 12:53:11.205000 audit[1983]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.205000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc64fceaf0 a2=0 a3=0 items=0 ppid=1841 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 12:53:11.209000 audit[1985]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.209000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd59b57cb0 a2=0 a3=0 items=0 ppid=1841 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 12:53:11.214000 audit[1987]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.214000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe1d5d2050 a2=0 a3=0 items=0 ppid=1841 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 12:53:11.218000 audit[1989]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:11.218000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc4a221090 a2=0 a3=0 items=0 ppid=1841 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 12:53:11.240000 audit[1993]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.240000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffeea6b28e0 a2=0 a3=0 items=0 ppid=1841 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 27 12:53:11.246000 audit[1995]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.246000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffd13c13a0 a2=0 a3=0 items=0 ppid=1841 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 27 12:53:11.265000 audit[2003]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.265000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd6a8dab80 a2=0 a3=0 items=0 ppid=1841 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 27 12:53:11.281000 audit[2009]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.281000 
audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeda1403d0 a2=0 a3=0 items=0 ppid=1841 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 27 12:53:11.286000 audit[2011]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.286000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffeb96264a0 a2=0 a3=0 items=0 ppid=1841 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.286000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 27 12:53:11.290000 audit[2013]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.290000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffec47d61d0 a2=0 a3=0 items=0 ppid=1841 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 27 12:53:11.295000 audit[2015]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.295000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec4da9890 a2=0 a3=0 items=0 ppid=1841 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 12:53:11.299000 audit[2017]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:11.299000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbce894b0 a2=0 a3=0 items=0 ppid=1841 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:11.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 27 12:53:11.301193 
systemd-networkd[1508]: docker0: Link UP Jan 27 12:53:11.307510 dockerd[1841]: time="2026-01-27T12:53:11.307432357Z" level=info msg="Loading containers: done." Jan 27 12:53:11.325309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2181180958-merged.mount: Deactivated successfully. Jan 27 12:53:11.330160 dockerd[1841]: time="2026-01-27T12:53:11.330085962Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 27 12:53:11.330245 dockerd[1841]: time="2026-01-27T12:53:11.330212307Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 27 12:53:11.330376 dockerd[1841]: time="2026-01-27T12:53:11.330319848Z" level=info msg="Initializing buildkit" Jan 27 12:53:11.374855 dockerd[1841]: time="2026-01-27T12:53:11.374808872Z" level=info msg="Completed buildkit initialization" Jan 27 12:53:11.380764 dockerd[1841]: time="2026-01-27T12:53:11.380611187Z" level=info msg="Daemon has completed initialization" Jan 27 12:53:11.380972 dockerd[1841]: time="2026-01-27T12:53:11.380817723Z" level=info msg="API listen on /run/docker.sock" Jan 27 12:53:11.381144 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 27 12:53:11.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:12.096825 containerd[1598]: time="2026-01-27T12:53:12.096792007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 27 12:53:12.625102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747778727.mount: Deactivated successfully. 
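
The audit PROCTITLE fields in the records above carry the exact command line each process ran, hex-encoded with NUL bytes separating the argv elements. A minimal decoding sketch (Python; not part of the log, the helper name is mine):

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_argv: str) -> str:
        return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

    # The rule registered by pid 1981 above decodes to the call dockerd issued:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D4100444F434B45522D55534552002D6A0052455455524E"
    ))
    # -> /usr/bin/iptables --wait -A DOCKER-USER -j RETURN

Decoded the same way, the surrounding records show dockerd setting up its DOCKER-USER, DOCKER-FORWARD and DOCKER-ISOLATION-STAGE-1/2 chains plus the MASQUERADE rule for 172.17.0.0/16 on docker0, just before systemd-networkd reports docker0: Link UP.
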
Jan 27 12:53:13.437991 containerd[1598]: time="2026-01-27T12:53:13.437775243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:13.439010 containerd[1598]: time="2026-01-27T12:53:13.438951291Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=0" Jan 27 12:53:13.440328 containerd[1598]: time="2026-01-27T12:53:13.440277548Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:13.443086 containerd[1598]: time="2026-01-27T12:53:13.442994055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:13.443924 containerd[1598]: time="2026-01-27T12:53:13.443867282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.347040068s" Jan 27 12:53:13.443983 containerd[1598]: time="2026-01-27T12:53:13.443974141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 27 12:53:13.444994 containerd[1598]: time="2026-01-27T12:53:13.444444444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 27 12:53:14.372364 containerd[1598]: time="2026-01-27T12:53:14.372168472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:14.373502 containerd[1598]: time="2026-01-27T12:53:14.373466351Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=0" Jan 27 12:53:14.374871 containerd[1598]: time="2026-01-27T12:53:14.374809289Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:14.378411 containerd[1598]: time="2026-01-27T12:53:14.378330623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:14.379302 containerd[1598]: time="2026-01-27T12:53:14.379244260Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 934.776192ms" Jan 27 12:53:14.379302 containerd[1598]: time="2026-01-27T12:53:14.379285957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 27 12:53:14.380071 containerd[1598]: 
time="2026-01-27T12:53:14.380032662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 27 12:53:15.155806 containerd[1598]: time="2026-01-27T12:53:15.155406116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:15.157876 containerd[1598]: time="2026-01-27T12:53:15.157848326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 27 12:53:15.161267 containerd[1598]: time="2026-01-27T12:53:15.161213503Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:15.164751 containerd[1598]: time="2026-01-27T12:53:15.164565011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:15.165668 containerd[1598]: time="2026-01-27T12:53:15.165586388Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 785.508151ms" Jan 27 12:53:15.165668 containerd[1598]: time="2026-01-27T12:53:15.165639227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 27 12:53:15.166489 containerd[1598]: time="2026-01-27T12:53:15.166421668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 27 12:53:16.186014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757158159.mount: Deactivated successfully. 
Jan 27 12:53:16.455778 containerd[1598]: time="2026-01-27T12:53:16.455548798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:16.456997 containerd[1598]: time="2026-01-27T12:53:16.456747308Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=14375786" Jan 27 12:53:16.458189 containerd[1598]: time="2026-01-27T12:53:16.458129609Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:16.461316 containerd[1598]: time="2026-01-27T12:53:16.461253282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:16.462052 containerd[1598]: time="2026-01-27T12:53:16.461969637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.295420773s" Jan 27 12:53:16.462052 containerd[1598]: time="2026-01-27T12:53:16.462033887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 27 12:53:16.462864 containerd[1598]: time="2026-01-27T12:53:16.462745168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 27 12:53:16.870672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697685054.mount: Deactivated successfully. 
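
Each completed pull above is logged with the image reference, its size in bytes and the elapsed time ("... in 1.295420773s"). A rough tally sketch over a saved copy of this journal (regex layout inferred from the lines above; feed the raw text on stdin):

    import re
    import sys

    # containerd's completion message: Pulled image \"<ref>\" ... in <duration>
    PULLED = re.compile(r'Pulled image \\"(.+?)\\".*? in ([0-9.]+m?s)')

    for ref, took in PULLED.findall(sys.stdin.read()):
        print(f"{took:>15}  {ref}")
    # e.g.   1.295420773s  registry.k8s.io/kube-proxy:v1.34.3
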
Jan 27 12:53:17.843204 containerd[1598]: time="2026-01-27T12:53:17.843007422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:17.844530 containerd[1598]: time="2026-01-27T12:53:17.844402244Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568704" Jan 27 12:53:17.845920 containerd[1598]: time="2026-01-27T12:53:17.845826227Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:17.849400 containerd[1598]: time="2026-01-27T12:53:17.849281733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:17.850942 containerd[1598]: time="2026-01-27T12:53:17.850850402Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.388052515s" Jan 27 12:53:17.851071 containerd[1598]: time="2026-01-27T12:53:17.851011913Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 27 12:53:17.852222 containerd[1598]: time="2026-01-27T12:53:17.851780666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 27 12:53:18.455545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 27 12:53:18.458234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 12:53:18.695342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:18.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:18.699018 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 27 12:53:18.699073 kernel: audit: type=1130 audit(1769518398.694:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:18.708677 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 12:53:18.748335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547599973.mount: Deactivated successfully. 
Jan 27 12:53:18.758601 containerd[1598]: time="2026-01-27T12:53:18.758557305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:18.759999 containerd[1598]: time="2026-01-27T12:53:18.759884494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 27 12:53:18.761918 containerd[1598]: time="2026-01-27T12:53:18.761866755Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:18.765299 containerd[1598]: time="2026-01-27T12:53:18.765225622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:18.766132 containerd[1598]: time="2026-01-27T12:53:18.766021914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 913.973157ms" Jan 27 12:53:18.766132 containerd[1598]: time="2026-01-27T12:53:18.766101312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 27 12:53:18.766203 kubelet[2197]: E0127 12:53:18.766094 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 12:53:18.766971 containerd[1598]: time="2026-01-27T12:53:18.766879065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 27 12:53:18.773061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 12:53:18.773310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 12:53:18.773831 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.7M memory peak. Jan 27 12:53:18.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 12:53:18.784999 kernel: audit: type=1131 audit(1769518398.772:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 12:53:19.216612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969763496.mount: Deactivated successfully. 
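
The kubelet's first start above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file is typically only written during cluster initialization, so the unit fails and is restarted until the config appears (as the later, successful start shows). A trivial wait sketch (path taken from the error above; the polling loop is purely illustrative):

    import time
    from pathlib import Path

    # Block until the kubelet config that the error above complains about exists.
    cfg = Path("/var/lib/kubelet/config.yaml")
    while not cfg.exists():
        time.sleep(2)
    print("kubelet config present:", cfg)
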
Jan 27 12:53:21.752528 containerd[1598]: time="2026-01-27T12:53:21.752403732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:21.754001 containerd[1598]: time="2026-01-27T12:53:21.753952773Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 27 12:53:21.755548 containerd[1598]: time="2026-01-27T12:53:21.755401148Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:21.759426 containerd[1598]: time="2026-01-27T12:53:21.759373097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:21.760646 containerd[1598]: time="2026-01-27T12:53:21.760587392Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.99336733s" Jan 27 12:53:21.760646 containerd[1598]: time="2026-01-27T12:53:21.760612459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 27 12:53:25.982981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:25.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:25.983294 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.7M memory peak. Jan 27 12:53:25.986605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 12:53:25.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:26.007512 kernel: audit: type=1130 audit(1769518405.982:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:26.007598 kernel: audit: type=1131 audit(1769518405.982:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:26.019101 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-8.scope)... Jan 27 12:53:26.019118 systemd[1]: Reloading... Jan 27 12:53:26.114021 zram_generator::config[2342]: No configuration found. Jan 27 12:53:26.357770 systemd[1]: Reloading finished in 338 ms. 
Jan 27 12:53:26.391000 audit: BPF prog-id=63 op=LOAD Jan 27 12:53:26.397958 kernel: audit: type=1334 audit(1769518406.391:289): prog-id=63 op=LOAD Jan 27 12:53:26.398022 kernel: audit: type=1334 audit(1769518406.391:290): prog-id=43 op=UNLOAD Jan 27 12:53:26.391000 audit: BPF prog-id=43 op=UNLOAD Jan 27 12:53:26.391000 audit: BPF prog-id=64 op=LOAD Jan 27 12:53:26.400656 kernel: audit: type=1334 audit(1769518406.391:291): prog-id=64 op=LOAD Jan 27 12:53:26.400739 kernel: audit: type=1334 audit(1769518406.391:292): prog-id=65 op=LOAD Jan 27 12:53:26.391000 audit: BPF prog-id=65 op=LOAD Jan 27 12:53:26.403188 kernel: audit: type=1334 audit(1769518406.391:293): prog-id=44 op=UNLOAD Jan 27 12:53:26.391000 audit: BPF prog-id=44 op=UNLOAD Jan 27 12:53:26.405767 kernel: audit: type=1334 audit(1769518406.391:294): prog-id=45 op=UNLOAD Jan 27 12:53:26.391000 audit: BPF prog-id=45 op=UNLOAD Jan 27 12:53:26.408303 kernel: audit: type=1334 audit(1769518406.392:295): prog-id=66 op=LOAD Jan 27 12:53:26.392000 audit: BPF prog-id=66 op=LOAD Jan 27 12:53:26.410820 kernel: audit: type=1334 audit(1769518406.392:296): prog-id=47 op=UNLOAD Jan 27 12:53:26.392000 audit: BPF prog-id=47 op=UNLOAD Jan 27 12:53:26.392000 audit: BPF prog-id=67 op=LOAD Jan 27 12:53:26.392000 audit: BPF prog-id=68 op=LOAD Jan 27 12:53:26.392000 audit: BPF prog-id=48 op=UNLOAD Jan 27 12:53:26.392000 audit: BPF prog-id=49 op=UNLOAD Jan 27 12:53:26.395000 audit: BPF prog-id=69 op=LOAD Jan 27 12:53:26.395000 audit: BPF prog-id=58 op=UNLOAD Jan 27 12:53:26.397000 audit: BPF prog-id=70 op=LOAD Jan 27 12:53:26.397000 audit: BPF prog-id=46 op=UNLOAD Jan 27 12:53:26.419000 audit: BPF prog-id=71 op=LOAD Jan 27 12:53:26.419000 audit: BPF prog-id=50 op=UNLOAD Jan 27 12:53:26.419000 audit: BPF prog-id=72 op=LOAD Jan 27 12:53:26.419000 audit: BPF prog-id=73 op=LOAD Jan 27 12:53:26.419000 audit: BPF prog-id=51 op=UNLOAD Jan 27 12:53:26.419000 audit: BPF prog-id=52 op=UNLOAD Jan 27 12:53:26.420000 audit: BPF prog-id=74 op=LOAD Jan 27 12:53:26.420000 audit: BPF prog-id=75 op=LOAD Jan 27 12:53:26.420000 audit: BPF prog-id=56 op=UNLOAD Jan 27 12:53:26.420000 audit: BPF prog-id=57 op=UNLOAD Jan 27 12:53:26.422000 audit: BPF prog-id=76 op=LOAD Jan 27 12:53:26.422000 audit: BPF prog-id=60 op=UNLOAD Jan 27 12:53:26.422000 audit: BPF prog-id=77 op=LOAD Jan 27 12:53:26.422000 audit: BPF prog-id=78 op=LOAD Jan 27 12:53:26.422000 audit: BPF prog-id=61 op=UNLOAD Jan 27 12:53:26.422000 audit: BPF prog-id=62 op=UNLOAD Jan 27 12:53:26.423000 audit: BPF prog-id=79 op=LOAD Jan 27 12:53:26.423000 audit: BPF prog-id=53 op=UNLOAD Jan 27 12:53:26.423000 audit: BPF prog-id=80 op=LOAD Jan 27 12:53:26.423000 audit: BPF prog-id=81 op=LOAD Jan 27 12:53:26.423000 audit: BPF prog-id=54 op=UNLOAD Jan 27 12:53:26.423000 audit: BPF prog-id=55 op=UNLOAD Jan 27 12:53:26.424000 audit: BPF prog-id=82 op=LOAD Jan 27 12:53:26.424000 audit: BPF prog-id=59 op=UNLOAD Jan 27 12:53:26.440740 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 27 12:53:26.440859 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 27 12:53:26.441363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:26.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 12:53:26.444069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
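
The run of audit: BPF prog-id=... op=LOAD/UNLOAD records above is systemd re-creating the BPF programs it attaches to unit cgroups (device filters and the like) during the daemon-reload: affected units get fresh program IDs and the old ones are unloaded. A quick tally sketch over saved journal text (names are mine):

    import re
    import sys
    from collections import Counter

    # Count "audit: BPF prog-id=<N> op=<LOAD|UNLOAD>" records.
    BPF = re.compile(r'audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)')

    churn = Counter(op for _, op in BPF.findall(sys.stdin.read()))
    print(churn)
    # For the reload block above, LOAD and UNLOAD counts match: programs are
    # swapped per unit rather than accumulating.
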
Jan 27 12:53:26.660041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:26.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:26.665025 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 12:53:26.717811 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 12:53:26.717811 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 12:53:26.717811 kubelet[2386]: I0127 12:53:26.717794 2386 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 12:53:27.200634 kubelet[2386]: I0127 12:53:27.200523 2386 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 27 12:53:27.200634 kubelet[2386]: I0127 12:53:27.200599 2386 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 12:53:27.200634 kubelet[2386]: I0127 12:53:27.200626 2386 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 27 12:53:27.200634 kubelet[2386]: I0127 12:53:27.200636 2386 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 12:53:27.200984 kubelet[2386]: I0127 12:53:27.200883 2386 server.go:956] "Client rotation is on, will bootstrap in background" Jan 27 12:53:27.207329 kubelet[2386]: E0127 12:53:27.207174 2386 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 27 12:53:27.207391 kubelet[2386]: I0127 12:53:27.207352 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 12:53:27.214971 kubelet[2386]: I0127 12:53:27.213291 2386 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 12:53:27.220619 kubelet[2386]: I0127 12:53:27.220587 2386 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 27 12:53:27.221622 kubelet[2386]: I0127 12:53:27.221563 2386 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 12:53:27.221811 kubelet[2386]: I0127 12:53:27.221610 2386 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 12:53:27.221811 kubelet[2386]: I0127 12:53:27.221803 2386 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 12:53:27.221811 kubelet[2386]: I0127 12:53:27.221812 2386 container_manager_linux.go:306] "Creating device plugin manager" Jan 27 12:53:27.222045 kubelet[2386]: I0127 12:53:27.221956 2386 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 27 12:53:27.225105 kubelet[2386]: I0127 12:53:27.225053 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:53:27.225873 kubelet[2386]: I0127 12:53:27.225828 2386 kubelet.go:475] "Attempting to sync node with API server" Jan 27 12:53:27.225873 kubelet[2386]: I0127 12:53:27.225868 2386 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 12:53:27.226000 kubelet[2386]: I0127 12:53:27.225974 2386 kubelet.go:387] "Adding apiserver pod source" Jan 27 12:53:27.226032 kubelet[2386]: I0127 12:53:27.226007 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 12:53:27.226390 kubelet[2386]: E0127 12:53:27.226324 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 27 12:53:27.227796 kubelet[2386]: E0127 12:53:27.226422 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 27 12:53:27.230038 kubelet[2386]: I0127 12:53:27.228888 2386 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 12:53:27.230038 kubelet[2386]: I0127 12:53:27.229313 2386 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 27 12:53:27.230038 kubelet[2386]: I0127 12:53:27.229337 2386 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 27 12:53:27.230038 kubelet[2386]: W0127 12:53:27.229376 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 27 12:53:27.233454 kubelet[2386]: I0127 12:53:27.233384 2386 server.go:1262] "Started kubelet" Jan 27 12:53:27.235177 kubelet[2386]: I0127 12:53:27.235102 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 12:53:27.236348 kubelet[2386]: I0127 12:53:27.236248 2386 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 12:53:27.236348 kubelet[2386]: I0127 12:53:27.236321 2386 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 27 12:53:27.236628 kubelet[2386]: I0127 12:53:27.236554 2386 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 12:53:27.236772 kubelet[2386]: I0127 12:53:27.236664 2386 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 12:53:27.237804 kubelet[2386]: I0127 12:53:27.237700 2386 server.go:310] "Adding debug handlers to kubelet server" Jan 27 12:53:27.238207 kubelet[2386]: E0127 12:53:27.236881 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188e979db6a82c60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-27 12:53:27.233317984 +0000 UTC m=+0.562943216,LastTimestamp:2026-01-27 12:53:27.233317984 +0000 UTC m=+0.562943216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 27 12:53:27.239028 kubelet[2386]: I0127 12:53:27.238881 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 12:53:27.239480 kubelet[2386]: E0127 12:53:27.239396 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:27.239480 kubelet[2386]: I0127 12:53:27.239469 2386 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 27 12:53:27.239738 kubelet[2386]: I0127 12:53:27.239625 2386 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 12:53:27.240031 kubelet[2386]: I0127 12:53:27.239802 2386 reconciler.go:29] "Reconciler: 
start to sync state" Jan 27 12:53:27.242346 kubelet[2386]: E0127 12:53:27.241348 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 27 12:53:27.242346 kubelet[2386]: E0127 12:53:27.241416 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Jan 27 12:53:27.243054 kubelet[2386]: E0127 12:53:27.242992 2386 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 12:53:27.243455 kubelet[2386]: I0127 12:53:27.243396 2386 factory.go:223] Registration of the systemd container factory successfully Jan 27 12:53:27.243582 kubelet[2386]: I0127 12:53:27.243548 2386 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 12:53:27.244671 kubelet[2386]: I0127 12:53:27.244620 2386 factory.go:223] Registration of the containerd container factory successfully Jan 27 12:53:27.248000 audit[2403]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.248000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc7e45f40 a2=0 a3=0 items=0 ppid=2386 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 12:53:27.252000 audit[2404]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.252000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed48b2970 a2=0 a3=0 items=0 ppid=2386 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 12:53:27.259000 audit[2409]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.259000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeb67e9190 a2=0 a3=0 items=0 ppid=2386 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 12:53:27.264000 audit[2413]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2413 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.264000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd540c0d90 a2=0 a3=0 items=0 ppid=2386 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 12:53:27.265980 kubelet[2386]: I0127 12:53:27.265882 2386 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 12:53:27.266127 kubelet[2386]: I0127 12:53:27.265983 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 12:53:27.266127 kubelet[2386]: I0127 12:53:27.266004 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:53:27.268971 kubelet[2386]: I0127 12:53:27.268820 2386 policy_none.go:49] "None policy: Start" Jan 27 12:53:27.268971 kubelet[2386]: I0127 12:53:27.268868 2386 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 27 12:53:27.269062 kubelet[2386]: I0127 12:53:27.268885 2386 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 27 12:53:27.270626 kubelet[2386]: I0127 12:53:27.270607 2386 policy_none.go:47] "Start" Jan 27 12:53:27.276000 audit[2416]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.276000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe4218a650 a2=0 a3=0 items=0 ppid=2386 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 27 12:53:27.278475 kubelet[2386]: I0127 12:53:27.278223 2386 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 27 12:53:27.280000 audit[2419]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:27.280000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe74cdb020 a2=0 a3=0 items=0 ppid=2386 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 12:53:27.280000 audit[2418]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.280000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda3ade1b0 a2=0 a3=0 items=0 ppid=2386 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 12:53:27.282814 kubelet[2386]: I0127 12:53:27.282209 2386 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 27 12:53:27.282814 kubelet[2386]: I0127 12:53:27.282223 2386 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 27 12:53:27.282814 kubelet[2386]: I0127 12:53:27.282246 2386 kubelet.go:2427] "Starting kubelet main sync loop" Jan 27 12:53:27.282814 kubelet[2386]: E0127 12:53:27.282285 2386 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 12:53:27.281752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 27 12:53:27.284985 kubelet[2386]: E0127 12:53:27.284313 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 27 12:53:27.284000 audit[2420]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:27.284000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeeb060fd0 a2=0 a3=0 items=0 ppid=2386 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 12:53:27.285000 audit[2421]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.285000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd46dfef80 a2=0 a3=0 items=0 ppid=2386 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 12:53:27.286000 audit[2423]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:27.286000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca39b1e30 a2=0 a3=0 items=0 ppid=2386 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 12:53:27.288000 audit[2424]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:27.288000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff622fcfd0 a2=0 a3=0 items=0 ppid=2386 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 12:53:27.288000 audit[2425]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:27.288000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd84647340 a2=0 a3=0 items=0 ppid=2386 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:27.288000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 12:53:27.293203 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 27 12:53:27.298379 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 27 12:53:27.319304 kubelet[2386]: E0127 12:53:27.319234 2386 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 27 12:53:27.319653 kubelet[2386]: I0127 12:53:27.319553 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 12:53:27.319653 kubelet[2386]: I0127 12:53:27.319569 2386 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 12:53:27.319963 kubelet[2386]: I0127 12:53:27.319859 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 12:53:27.321312 kubelet[2386]: E0127 12:53:27.321178 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 27 12:53:27.321312 kubelet[2386]: E0127 12:53:27.321283 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 27 12:53:27.396648 systemd[1]: Created slice kubepods-burstable-pod9399c91434089834b56835770d5faa10.slice - libcontainer container kubepods-burstable-pod9399c91434089834b56835770d5faa10.slice. Jan 27 12:53:27.413180 kubelet[2386]: E0127 12:53:27.413100 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:27.417607 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 27 12:53:27.420986 kubelet[2386]: I0127 12:53:27.420874 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 27 12:53:27.421272 kubelet[2386]: E0127 12:53:27.421240 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 27 12:53:27.435599 kubelet[2386]: E0127 12:53:27.435510 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:27.438865 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
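
With the systemd cgroup driver reported earlier, the kubelet gives each static pod its own slice under kubepods.slice, and the slice names created above embed the pod UIDs (9399c914..., 5bbfee13..., 07ca0cbf...) that reappear in the volume mounts and sandbox requests that follow. A naming sketch, illustrative only (the real translation has more cases, e.g. guaranteed pods sit directly under kubepods.slice and dashes in UIDs become underscores):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # Mirrors the names above, e.g. kubepods-burstable-pod9399c914....slice
        prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
        return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("burstable", "9399c91434089834b56835770d5faa10"))
    # kubepods-burstable-pod9399c91434089834b56835770d5faa10.slice
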
Jan 27 12:53:27.440225 kubelet[2386]: I0127 12:53:27.440083 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:27.440225 kubelet[2386]: I0127 12:53:27.440125 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:27.440225 kubelet[2386]: I0127 12:53:27.440140 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:27.440225 kubelet[2386]: I0127 12:53:27.440152 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:27.440225 kubelet[2386]: I0127 12:53:27.440187 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:27.440386 kubelet[2386]: I0127 12:53:27.440202 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:27.440386 kubelet[2386]: I0127 12:53:27.440353 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 27 12:53:27.440386 kubelet[2386]: I0127 12:53:27.440373 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:27.440386 kubelet[2386]: I0127 12:53:27.440386 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:27.441828 kubelet[2386]: E0127 12:53:27.441793 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:27.442282 kubelet[2386]: E0127 12:53:27.442215 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Jan 27 12:53:27.623573 kubelet[2386]: I0127 12:53:27.623398 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 27 12:53:27.624573 kubelet[2386]: E0127 12:53:27.624040 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 27 12:53:27.717388 kubelet[2386]: E0127 12:53:27.717240 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:27.718611 containerd[1598]: time="2026-01-27T12:53:27.718446048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9399c91434089834b56835770d5faa10,Namespace:kube-system,Attempt:0,}" Jan 27 12:53:27.739253 kubelet[2386]: E0127 12:53:27.739179 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:27.739874 containerd[1598]: time="2026-01-27T12:53:27.739805668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 27 12:53:27.745696 kubelet[2386]: E0127 12:53:27.745604 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:27.746374 containerd[1598]: time="2026-01-27T12:53:27.746195265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 27 12:53:27.843745 kubelet[2386]: E0127 12:53:27.843585 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Jan 27 12:53:28.027056 kubelet[2386]: I0127 12:53:28.026768 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 27 12:53:28.027402 kubelet[2386]: E0127 12:53:28.027333 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 27 12:53:28.133003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293116491.mount: Deactivated successfully. 
Jan 27 12:53:28.133443 kubelet[2386]: E0127 12:53:28.133369 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 27 12:53:28.140417 containerd[1598]: time="2026-01-27T12:53:28.140349999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 12:53:28.143080 containerd[1598]: time="2026-01-27T12:53:28.143015706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 12:53:28.145987 containerd[1598]: time="2026-01-27T12:53:28.145806628Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 12:53:28.148255 containerd[1598]: time="2026-01-27T12:53:28.148116822Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 12:53:28.149400 containerd[1598]: time="2026-01-27T12:53:28.149322321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 12:53:28.152059 containerd[1598]: time="2026-01-27T12:53:28.151829055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 12:53:28.153212 containerd[1598]: time="2026-01-27T12:53:28.153186680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 12:53:28.154593 containerd[1598]: time="2026-01-27T12:53:28.154501162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 12:53:28.155384 containerd[1598]: time="2026-01-27T12:53:28.155282059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 433.398763ms" Jan 27 12:53:28.156869 containerd[1598]: time="2026-01-27T12:53:28.156809985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 413.98135ms" Jan 27 12:53:28.159256 containerd[1598]: time="2026-01-27T12:53:28.159172223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 409.950106ms" Jan 27 12:53:28.190023 containerd[1598]: time="2026-01-27T12:53:28.189881144Z" level=info msg="connecting to shim 6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921" address="unix:///run/containerd/s/a9688d0c952e6060ccd92b39539987d7e1b0d346ee260127cf302dfadf078ebd" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:28.197382 containerd[1598]: time="2026-01-27T12:53:28.197322328Z" level=info msg="connecting to shim 22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292" address="unix:///run/containerd/s/6f89d3ff7e4dac51912b215e20326ce58cdab762e5e93d92fd5f7004a1066cea" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:28.205702 containerd[1598]: time="2026-01-27T12:53:28.205577634Z" level=info msg="connecting to shim d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2" address="unix:///run/containerd/s/71ea37c2061905b9fd8cece340752b005665f79ee87c1e775860bf750a0b4d61" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:28.233476 systemd[1]: Started cri-containerd-6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921.scope - libcontainer container 6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921. Jan 27 12:53:28.240984 systemd[1]: Started cri-containerd-22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292.scope - libcontainer container 22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292. Jan 27 12:53:28.243532 systemd[1]: Started cri-containerd-d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2.scope - libcontainer container d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2. Jan 27 12:53:28.254000 audit: BPF prog-id=83 op=LOAD Jan 27 12:53:28.255000 audit: BPF prog-id=84 op=LOAD Jan 27 12:53:28.255000 audit[2470]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.255000 audit: BPF prog-id=84 op=UNLOAD Jan 27 12:53:28.255000 audit[2470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.255000 audit: BPF prog-id=85 op=LOAD Jan 27 12:53:28.255000 audit[2470]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
12:53:28.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.255000 audit: BPF prog-id=86 op=LOAD Jan 27 12:53:28.255000 audit[2470]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.256000 audit: BPF prog-id=86 op=UNLOAD Jan 27 12:53:28.256000 audit[2470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.256000 audit: BPF prog-id=85 op=UNLOAD Jan 27 12:53:28.256000 audit[2470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.256000 audit: BPF prog-id=87 op=LOAD Jan 27 12:53:28.256000 audit[2470]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2438 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323663393934326138633563343433306534663031656435613764 Jan 27 12:53:28.259000 audit: BPF prog-id=88 op=LOAD Jan 27 12:53:28.259000 audit: BPF prog-id=89 op=LOAD Jan 27 12:53:28.259000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.259000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.259000 audit: BPF prog-id=89 op=UNLOAD Jan 27 12:53:28.259000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.259000 audit: BPF prog-id=90 op=LOAD Jan 27 12:53:28.259000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.260000 audit: BPF prog-id=91 op=LOAD Jan 27 12:53:28.260000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.260000 audit: BPF prog-id=91 op=UNLOAD Jan 27 12:53:28.260000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.260000 audit: BPF prog-id=90 op=UNLOAD Jan 27 12:53:28.260000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.260000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.260000 audit: BPF prog-id=92 op=LOAD Jan 27 12:53:28.260000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2478 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363530643333326566666661323135623535643533366437393831 Jan 27 12:53:28.261000 audit: BPF prog-id=93 op=LOAD Jan 27 12:53:28.261000 audit: BPF prog-id=94 op=LOAD Jan 27 12:53:28.261000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=94 op=UNLOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=95 op=LOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=96 op=LOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=96 op=UNLOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=95 op=UNLOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.262000 audit: BPF prog-id=97 op=LOAD Jan 27 12:53:28.262000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2453 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663439343862346533643339363466316666383835616131393265 Jan 27 12:53:28.306204 containerd[1598]: time="2026-01-27T12:53:28.306018814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2\"" Jan 27 12:53:28.310312 kubelet[2386]: E0127 12:53:28.308825 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:28.320629 containerd[1598]: time="2026-01-27T12:53:28.320550098Z" level=info msg="CreateContainer within sandbox \"d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 27 12:53:28.337472 containerd[1598]: time="2026-01-27T12:53:28.337439757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292\"" Jan 27 12:53:28.338782 kubelet[2386]: E0127 12:53:28.338630 2386 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:28.342479 kubelet[2386]: E0127 12:53:28.342418 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 27 12:53:28.343799 containerd[1598]: time="2026-01-27T12:53:28.343631523Z" level=info msg="CreateContainer within sandbox \"22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 27 12:53:28.344333 containerd[1598]: time="2026-01-27T12:53:28.344284963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9399c91434089834b56835770d5faa10,Namespace:kube-system,Attempt:0,} returns sandbox id \"6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921\"" Jan 27 12:53:28.345189 kubelet[2386]: E0127 12:53:28.345150 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:28.349960 containerd[1598]: time="2026-01-27T12:53:28.349879330Z" level=info msg="CreateContainer within sandbox \"6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 27 12:53:28.353972 containerd[1598]: time="2026-01-27T12:53:28.353476224Z" level=info msg="Container b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:28.360019 containerd[1598]: time="2026-01-27T12:53:28.359970849Z" level=info msg="Container 5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:28.365021 containerd[1598]: time="2026-01-27T12:53:28.364851738Z" level=info msg="CreateContainer within sandbox \"d5650d332efffa215b55d536d7981e712fbd4dd7c17cb08f15a030b1074396f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7\"" Jan 27 12:53:28.366564 containerd[1598]: time="2026-01-27T12:53:28.366544700Z" level=info msg="StartContainer for \"b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7\"" Jan 27 12:53:28.367113 containerd[1598]: time="2026-01-27T12:53:28.366594989Z" level=info msg="Container 4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:28.369410 containerd[1598]: time="2026-01-27T12:53:28.369386326Z" level=info msg="connecting to shim b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7" address="unix:///run/containerd/s/71ea37c2061905b9fd8cece340752b005665f79ee87c1e775860bf750a0b4d61" protocol=ttrpc version=3 Jan 27 12:53:28.375136 containerd[1598]: time="2026-01-27T12:53:28.375021957Z" level=info msg="CreateContainer within sandbox \"22f4948b4e3d3964f1ff885aa192e0a2f29f69973a2fe1c822092cb97abcd292\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004\"" Jan 27 12:53:28.376028 containerd[1598]: time="2026-01-27T12:53:28.375786795Z" level=info msg="StartContainer for 
\"5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004\"" Jan 27 12:53:28.378160 containerd[1598]: time="2026-01-27T12:53:28.378135221Z" level=info msg="connecting to shim 5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004" address="unix:///run/containerd/s/6f89d3ff7e4dac51912b215e20326ce58cdab762e5e93d92fd5f7004a1066cea" protocol=ttrpc version=3 Jan 27 12:53:28.380753 containerd[1598]: time="2026-01-27T12:53:28.380681656Z" level=info msg="CreateContainer within sandbox \"6926c9942a8c5c4430e4f01ed5a7d3b2e23de3eb3be55043f79efe94d15c4921\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6\"" Jan 27 12:53:28.381372 containerd[1598]: time="2026-01-27T12:53:28.381349703Z" level=info msg="StartContainer for \"4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6\"" Jan 27 12:53:28.382447 containerd[1598]: time="2026-01-27T12:53:28.382425312Z" level=info msg="connecting to shim 4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6" address="unix:///run/containerd/s/a9688d0c952e6060ccd92b39539987d7e1b0d346ee260127cf302dfadf078ebd" protocol=ttrpc version=3 Jan 27 12:53:28.403251 systemd[1]: Started cri-containerd-b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7.scope - libcontainer container b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7. Jan 27 12:53:28.413335 systemd[1]: Started cri-containerd-5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004.scope - libcontainer container 5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004. Jan 27 12:53:28.424318 systemd[1]: Started cri-containerd-4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6.scope - libcontainer container 4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6. 
Jan 27 12:53:28.426235 kubelet[2386]: E0127 12:53:28.426189 2386 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 27 12:53:28.429000 audit: BPF prog-id=98 op=LOAD Jan 27 12:53:28.430000 audit: BPF prog-id=99 op=LOAD Jan 27 12:53:28.430000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.430000 audit: BPF prog-id=99 op=UNLOAD Jan 27 12:53:28.430000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.430000 audit: BPF prog-id=100 op=LOAD Jan 27 12:53:28.430000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.430000 audit: BPF prog-id=101 op=LOAD Jan 27 12:53:28.430000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.430000 audit: BPF prog-id=101 op=UNLOAD Jan 27 12:53:28.430000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
12:53:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.431000 audit: BPF prog-id=100 op=UNLOAD Jan 27 12:53:28.431000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.431000 audit: BPF prog-id=102 op=LOAD Jan 27 12:53:28.431000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231666230623531386463336663353464623036643133666536336634 Jan 27 12:53:28.434000 audit: BPF prog-id=103 op=LOAD Jan 27 12:53:28.435000 audit: BPF prog-id=104 op=LOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=104 op=UNLOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=105 op=LOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=106 op=LOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=106 op=UNLOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=105 op=UNLOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.435000 audit: BPF prog-id=107 op=LOAD Jan 27 12:53:28.435000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2453 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306435366464383562316132333362346235306537633231383230 Jan 27 12:53:28.449000 audit: BPF prog-id=108 op=LOAD Jan 27 12:53:28.449000 audit: BPF prog-id=109 op=LOAD Jan 27 12:53:28.449000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.449000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.449000 audit: BPF prog-id=109 op=UNLOAD Jan 27 12:53:28.449000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.450000 audit: BPF prog-id=110 op=LOAD Jan 27 12:53:28.450000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.450000 audit: BPF prog-id=111 op=LOAD Jan 27 12:53:28.450000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.450000 audit: BPF prog-id=111 op=UNLOAD Jan 27 12:53:28.450000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.450000 audit: BPF prog-id=110 op=UNLOAD Jan 27 12:53:28.450000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.450000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.450000 audit: BPF prog-id=112 op=LOAD Jan 27 12:53:28.450000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2438 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:28.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356431326362316534313333333637316337613362613838316262 Jan 27 12:53:28.487094 containerd[1598]: time="2026-01-27T12:53:28.486952410Z" level=info msg="StartContainer for \"5d0d56dd85b1a233b4b50e7c218209198034d17e1a09056e2e9179e632c26004\" returns successfully" Jan 27 12:53:28.519331 containerd[1598]: time="2026-01-27T12:53:28.519191091Z" level=info msg="StartContainer for \"b1fb0b518dc3fc54db06d13fe63f46b2acb5af95e8fb868e2ef46c9a06e157e7\" returns successfully" Jan 27 12:53:28.533694 containerd[1598]: time="2026-01-27T12:53:28.533553551Z" level=info msg="StartContainer for \"4c5d12cb1e41333671c7a3ba881bbd6555740f09e33517cf3922885caa0224e6\" returns successfully" Jan 27 12:53:28.830887 kubelet[2386]: I0127 12:53:28.830803 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 27 12:53:29.303775 kubelet[2386]: E0127 12:53:29.303264 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:29.303775 kubelet[2386]: E0127 12:53:29.303456 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:29.307501 kubelet[2386]: E0127 12:53:29.307302 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:29.307501 kubelet[2386]: E0127 12:53:29.307431 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:29.312808 kubelet[2386]: E0127 12:53:29.312783 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:29.313146 kubelet[2386]: E0127 12:53:29.313128 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:29.740866 kubelet[2386]: E0127 12:53:29.740666 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 27 12:53:29.813781 kubelet[2386]: I0127 12:53:29.813641 2386 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 27 12:53:29.813781 kubelet[2386]: E0127 12:53:29.813757 2386 kubelet_node_status.go:486] "Error updating node status, will retry" err="error 
getting node \"localhost\": node \"localhost\" not found" Jan 27 12:53:29.829612 kubelet[2386]: E0127 12:53:29.829489 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:29.930142 kubelet[2386]: E0127 12:53:29.930035 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:30.030537 kubelet[2386]: E0127 12:53:30.030329 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:30.131500 kubelet[2386]: E0127 12:53:30.131359 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:30.231682 kubelet[2386]: E0127 12:53:30.231572 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:30.314979 kubelet[2386]: E0127 12:53:30.314689 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:30.314979 kubelet[2386]: E0127 12:53:30.314833 2386 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 27 12:53:30.314979 kubelet[2386]: E0127 12:53:30.314879 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:30.315454 kubelet[2386]: E0127 12:53:30.315023 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:30.332246 kubelet[2386]: E0127 12:53:30.332153 2386 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 27 12:53:30.441987 kubelet[2386]: I0127 12:53:30.441823 2386 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:30.449197 kubelet[2386]: E0127 12:53:30.449114 2386 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:30.449197 kubelet[2386]: I0127 12:53:30.449181 2386 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:30.450746 kubelet[2386]: E0127 12:53:30.450641 2386 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:30.450746 kubelet[2386]: I0127 12:53:30.450685 2386 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 27 12:53:30.452634 kubelet[2386]: E0127 12:53:30.452585 2386 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 27 12:53:30.760565 kubelet[2386]: I0127 12:53:30.760402 2386 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:30.767487 kubelet[2386]: E0127 
12:53:30.767459 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:31.229779 kubelet[2386]: I0127 12:53:31.229542 2386 apiserver.go:52] "Watching apiserver" Jan 27 12:53:31.240232 kubelet[2386]: I0127 12:53:31.240151 2386 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 12:53:31.315678 kubelet[2386]: E0127 12:53:31.315517 2386 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:32.123401 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-8.scope)... Jan 27 12:53:32.123462 systemd[1]: Reloading... Jan 27 12:53:32.220089 zram_generator::config[2728]: No configuration found. Jan 27 12:53:32.491872 systemd[1]: Reloading finished in 367 ms. Jan 27 12:53:32.536855 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 12:53:32.557228 systemd[1]: kubelet.service: Deactivated successfully. Jan 27 12:53:32.557603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:32.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:32.557702 systemd[1]: kubelet.service: Consumed 1.116s CPU time, 126.5M memory peak. Jan 27 12:53:32.560457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 12:53:32.560699 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 27 12:53:32.560791 kernel: audit: type=1131 audit(1769518412.556:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:32.559000 audit: BPF prog-id=113 op=LOAD Jan 27 12:53:32.573830 kernel: audit: type=1334 audit(1769518412.559:392): prog-id=113 op=LOAD Jan 27 12:53:32.573872 kernel: audit: type=1334 audit(1769518412.559:393): prog-id=63 op=UNLOAD Jan 27 12:53:32.559000 audit: BPF prog-id=63 op=UNLOAD Jan 27 12:53:32.576695 kernel: audit: type=1334 audit(1769518412.559:394): prog-id=114 op=LOAD Jan 27 12:53:32.559000 audit: BPF prog-id=114 op=LOAD Jan 27 12:53:32.579521 kernel: audit: type=1334 audit(1769518412.559:395): prog-id=115 op=LOAD Jan 27 12:53:32.559000 audit: BPF prog-id=115 op=LOAD Jan 27 12:53:32.582215 kernel: audit: type=1334 audit(1769518412.559:396): prog-id=64 op=UNLOAD Jan 27 12:53:32.559000 audit: BPF prog-id=64 op=UNLOAD Jan 27 12:53:32.585110 kernel: audit: type=1334 audit(1769518412.559:397): prog-id=65 op=UNLOAD Jan 27 12:53:32.559000 audit: BPF prog-id=65 op=UNLOAD Jan 27 12:53:32.559000 audit: BPF prog-id=116 op=LOAD Jan 27 12:53:32.590578 kernel: audit: type=1334 audit(1769518412.559:398): prog-id=116 op=LOAD Jan 27 12:53:32.590632 kernel: audit: type=1334 audit(1769518412.559:399): prog-id=69 op=UNLOAD Jan 27 12:53:32.559000 audit: BPF prog-id=69 op=UNLOAD Jan 27 12:53:32.593151 kernel: audit: type=1334 audit(1769518412.563:400): prog-id=117 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=117 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=71 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=118 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=119 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=72 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=73 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=120 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=121 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=74 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=75 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=122 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=76 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=123 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=124 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=77 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=78 op=UNLOAD Jan 27 12:53:32.563000 audit: BPF prog-id=125 op=LOAD Jan 27 12:53:32.563000 audit: BPF prog-id=82 op=UNLOAD Jan 27 12:53:32.570000 audit: BPF prog-id=126 op=LOAD Jan 27 12:53:32.570000 audit: BPF prog-id=79 op=UNLOAD Jan 27 12:53:32.570000 audit: BPF prog-id=127 op=LOAD Jan 27 12:53:32.570000 audit: BPF prog-id=128 op=LOAD Jan 27 12:53:32.570000 audit: BPF prog-id=80 op=UNLOAD Jan 27 12:53:32.570000 audit: BPF prog-id=81 op=UNLOAD Jan 27 12:53:32.570000 audit: BPF prog-id=129 op=LOAD Jan 27 12:53:32.609000 audit: BPF prog-id=66 op=UNLOAD Jan 27 12:53:32.609000 audit: BPF prog-id=130 op=LOAD Jan 27 12:53:32.609000 audit: BPF prog-id=131 op=LOAD Jan 27 12:53:32.609000 audit: BPF prog-id=67 op=UNLOAD Jan 27 12:53:32.609000 audit: BPF prog-id=68 op=UNLOAD Jan 27 12:53:32.611000 audit: BPF prog-id=132 op=LOAD Jan 27 12:53:32.611000 audit: BPF prog-id=70 op=UNLOAD Jan 27 12:53:32.811068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 12:53:32.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:32.820875 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 12:53:32.889131 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 12:53:32.889131 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 12:53:32.889478 kubelet[2768]: I0127 12:53:32.889141 2768 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 12:53:32.899320 kubelet[2768]: I0127 12:53:32.899240 2768 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 27 12:53:32.899320 kubelet[2768]: I0127 12:53:32.899289 2768 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 12:53:32.899320 kubelet[2768]: I0127 12:53:32.899323 2768 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 27 12:53:32.899487 kubelet[2768]: I0127 12:53:32.899338 2768 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 12:53:32.899695 kubelet[2768]: I0127 12:53:32.899551 2768 server.go:956] "Client rotation is on, will bootstrap in background" Jan 27 12:53:32.900971 kubelet[2768]: I0127 12:53:32.900782 2768 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 27 12:53:32.903098 kubelet[2768]: I0127 12:53:32.903003 2768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 12:53:32.909302 kubelet[2768]: I0127 12:53:32.909068 2768 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 12:53:32.918479 kubelet[2768]: I0127 12:53:32.918410 2768 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 27 12:53:32.918948 kubelet[2768]: I0127 12:53:32.918794 2768 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 12:53:32.919195 kubelet[2768]: I0127 12:53:32.918875 2768 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 12:53:32.919351 kubelet[2768]: I0127 12:53:32.919205 2768 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 12:53:32.919351 kubelet[2768]: I0127 12:53:32.919221 2768 container_manager_linux.go:306] "Creating device plugin manager" Jan 27 12:53:32.919351 kubelet[2768]: I0127 12:53:32.919250 2768 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 27 12:53:32.920142 kubelet[2768]: I0127 12:53:32.920036 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:53:32.920388 kubelet[2768]: I0127 12:53:32.920318 2768 kubelet.go:475] "Attempting to sync node with API server" Jan 27 12:53:32.920388 kubelet[2768]: I0127 12:53:32.920352 2768 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 12:53:32.920388 kubelet[2768]: I0127 12:53:32.920372 2768 kubelet.go:387] "Adding apiserver pod source" Jan 27 12:53:32.920498 kubelet[2768]: I0127 12:53:32.920418 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 12:53:32.925860 kubelet[2768]: I0127 12:53:32.924999 2768 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 12:53:32.927162 kubelet[2768]: I0127 12:53:32.926673 2768 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 27 12:53:32.927162 kubelet[2768]: I0127 12:53:32.926796 2768 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 27 12:53:32.934209 
kubelet[2768]: I0127 12:53:32.934119 2768 server.go:1262] "Started kubelet" Jan 27 12:53:32.935265 kubelet[2768]: I0127 12:53:32.935189 2768 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 12:53:32.936513 kubelet[2768]: I0127 12:53:32.936473 2768 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 12:53:32.936606 kubelet[2768]: I0127 12:53:32.936550 2768 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 27 12:53:32.937771 kubelet[2768]: I0127 12:53:32.937672 2768 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 12:53:32.945184 kubelet[2768]: I0127 12:53:32.945059 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 12:53:32.947881 kubelet[2768]: I0127 12:53:32.947797 2768 server.go:310] "Adding debug handlers to kubelet server" Jan 27 12:53:32.951034 kubelet[2768]: E0127 12:53:32.947800 2768 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 12:53:32.951661 kubelet[2768]: I0127 12:53:32.951588 2768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 12:53:32.955563 kubelet[2768]: I0127 12:53:32.955520 2768 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 27 12:53:32.956514 kubelet[2768]: I0127 12:53:32.956090 2768 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 12:53:32.956514 kubelet[2768]: I0127 12:53:32.956223 2768 reconciler.go:29] "Reconciler: start to sync state" Jan 27 12:53:32.956650 kubelet[2768]: I0127 12:53:32.956584 2768 factory.go:223] Registration of the systemd container factory successfully Jan 27 12:53:32.956775 kubelet[2768]: I0127 12:53:32.956688 2768 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 12:53:32.961106 kubelet[2768]: I0127 12:53:32.961006 2768 factory.go:223] Registration of the containerd container factory successfully Jan 27 12:53:32.972082 kubelet[2768]: I0127 12:53:32.971949 2768 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 27 12:53:32.985964 kubelet[2768]: I0127 12:53:32.985586 2768 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 27 12:53:32.985964 kubelet[2768]: I0127 12:53:32.985624 2768 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 27 12:53:32.985964 kubelet[2768]: I0127 12:53:32.985643 2768 kubelet.go:2427] "Starting kubelet main sync loop" Jan 27 12:53:32.985964 kubelet[2768]: E0127 12:53:32.985685 2768 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 12:53:33.018313 kubelet[2768]: I0127 12:53:33.018238 2768 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 12:53:33.018313 kubelet[2768]: I0127 12:53:33.018283 2768 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 12:53:33.018313 kubelet[2768]: I0127 12:53:33.018302 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:53:33.018470 kubelet[2768]: I0127 12:53:33.018410 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 27 12:53:33.018470 kubelet[2768]: I0127 12:53:33.018419 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 27 12:53:33.018470 kubelet[2768]: I0127 12:53:33.018434 2768 policy_none.go:49] "None policy: Start" Jan 27 12:53:33.018470 kubelet[2768]: I0127 12:53:33.018443 2768 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 27 12:53:33.018470 kubelet[2768]: I0127 12:53:33.018452 2768 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 27 12:53:33.018576 kubelet[2768]: I0127 12:53:33.018525 2768 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 27 12:53:33.018576 kubelet[2768]: I0127 12:53:33.018532 2768 policy_none.go:47] "Start" Jan 27 12:53:33.024817 kubelet[2768]: E0127 12:53:33.024762 2768 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 27 12:53:33.025067 kubelet[2768]: I0127 12:53:33.024995 2768 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 12:53:33.025067 kubelet[2768]: I0127 12:53:33.025020 2768 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 12:53:33.025312 kubelet[2768]: I0127 12:53:33.025198 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 12:53:33.026505 kubelet[2768]: E0127 12:53:33.026394 2768 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 27 12:53:33.087597 kubelet[2768]: I0127 12:53:33.087348 2768 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.087597 kubelet[2768]: I0127 12:53:33.087529 2768 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:33.087784 kubelet[2768]: I0127 12:53:33.087627 2768 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 27 12:53:33.096883 kubelet[2768]: E0127 12:53:33.096809 2768 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.135957 kubelet[2768]: I0127 12:53:33.135796 2768 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 27 12:53:33.147107 kubelet[2768]: I0127 12:53:33.147041 2768 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 27 12:53:33.147374 kubelet[2768]: I0127 12:53:33.147131 2768 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 27 12:53:33.257612 kubelet[2768]: I0127 12:53:33.257545 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.257612 kubelet[2768]: I0127 12:53:33.257599 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.257612 kubelet[2768]: I0127 12:53:33.257623 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.257612 kubelet[2768]: I0127 12:53:33.257638 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 27 12:53:33.257612 kubelet[2768]: I0127 12:53:33.257651 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:33.257981 kubelet[2768]: I0127 12:53:33.257664 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " 
pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:33.257981 kubelet[2768]: I0127 12:53:33.257678 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.257981 kubelet[2768]: I0127 12:53:33.257873 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:33.258058 kubelet[2768]: I0127 12:53:33.257990 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9399c91434089834b56835770d5faa10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9399c91434089834b56835770d5faa10\") " pod="kube-system/kube-apiserver-localhost" Jan 27 12:53:33.397346 kubelet[2768]: E0127 12:53:33.397158 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:33.397623 kubelet[2768]: E0127 12:53:33.397529 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:33.398015 kubelet[2768]: E0127 12:53:33.397653 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:33.921565 kubelet[2768]: I0127 12:53:33.921453 2768 apiserver.go:52] "Watching apiserver" Jan 27 12:53:33.957168 kubelet[2768]: I0127 12:53:33.957018 2768 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 12:53:34.009858 kubelet[2768]: E0127 12:53:34.009711 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:34.010314 kubelet[2768]: I0127 12:53:34.010201 2768 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:34.011107 kubelet[2768]: E0127 12:53:34.011045 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:34.022275 kubelet[2768]: E0127 12:53:34.022144 2768 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 27 12:53:34.024203 kubelet[2768]: E0127 12:53:34.024174 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:34.057089 kubelet[2768]: I0127 12:53:34.056848 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=1.056830157 podStartE2EDuration="1.056830157s" podCreationTimestamp="2026-01-27 12:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:53:34.046210969 +0000 UTC m=+1.220121401" watchObservedRunningTime="2026-01-27 12:53:34.056830157 +0000 UTC m=+1.230740588" Jan 27 12:53:34.068482 kubelet[2768]: I0127 12:53:34.068303 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.068291068 podStartE2EDuration="4.068291068s" podCreationTimestamp="2026-01-27 12:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:53:34.068197495 +0000 UTC m=+1.242107926" watchObservedRunningTime="2026-01-27 12:53:34.068291068 +0000 UTC m=+1.242201499" Jan 27 12:53:34.068482 kubelet[2768]: I0127 12:53:34.068409 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.068403839 podStartE2EDuration="1.068403839s" podCreationTimestamp="2026-01-27 12:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:53:34.057370436 +0000 UTC m=+1.231280866" watchObservedRunningTime="2026-01-27 12:53:34.068403839 +0000 UTC m=+1.242314269" Jan 27 12:53:35.011635 kubelet[2768]: E0127 12:53:35.011330 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:35.011635 kubelet[2768]: E0127 12:53:35.011366 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:35.012701 kubelet[2768]: E0127 12:53:35.012523 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:36.013421 kubelet[2768]: E0127 12:53:36.013394 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:37.181957 kubelet[2768]: E0127 12:53:37.181816 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:38.537025 kubelet[2768]: I0127 12:53:38.536541 2768 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 27 12:53:38.537471 containerd[1598]: time="2026-01-27T12:53:38.537273901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 27 12:53:38.537778 kubelet[2768]: I0127 12:53:38.537470 2768 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 27 12:53:39.549105 systemd[1]: Created slice kubepods-besteffort-pod786b358a_1c4e_4eaf_9a74_ad018c309e7a.slice - libcontainer container kubepods-besteffort-pod786b358a_1c4e_4eaf_9a74_ad018c309e7a.slice. 
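The repeated dns.go "Nameserver limits exceeded" errors above are benign: the kubelet copies at most three nameservers from the host resolv.conf into pod DNS configuration, so on a host listing more than three only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept and the rest are dropped with this warning. A minimal sketch of how to spot the condition on the node (illustrative only, not kubelet code; the resolv.conf path is an assumption and may differ under systemd-resolved):

```python
# Illustrative check, not kubelet code: count nameserver entries in the host
# resolv.conf to see why the kubelet logs "Nameserver limits exceeded".
RESOLV_CONF = "/etc/resolv.conf"   # assumed path; systemd-resolved hosts may use a stub file
MAX_NAMESERVERS = 3                # the kubelet keeps at most three nameservers per pod

def nameservers(path: str = RESOLV_CONF) -> list[str]:
    servers = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers

if __name__ == "__main__":
    ns = nameservers()
    kept, dropped = ns[:MAX_NAMESERVERS], ns[MAX_NAMESERVERS:]
    print("kept:", kept)
    if dropped:
        print("dropped (would trigger the kubelet warning):", dropped)
```

Trimming the host file to three nameservers, or pointing the kubelet at a trimmed copy via its --resolv-conf setting, silences the warning.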
Jan 27 12:53:39.699955 kubelet[2768]: I0127 12:53:39.699403 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/786b358a-1c4e-4eaf-9a74-ad018c309e7a-kube-proxy\") pod \"kube-proxy-dcr5j\" (UID: \"786b358a-1c4e-4eaf-9a74-ad018c309e7a\") " pod="kube-system/kube-proxy-dcr5j" Jan 27 12:53:39.699955 kubelet[2768]: I0127 12:53:39.699432 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5jt\" (UniqueName: \"kubernetes.io/projected/786b358a-1c4e-4eaf-9a74-ad018c309e7a-kube-api-access-7f5jt\") pod \"kube-proxy-dcr5j\" (UID: \"786b358a-1c4e-4eaf-9a74-ad018c309e7a\") " pod="kube-system/kube-proxy-dcr5j" Jan 27 12:53:39.699955 kubelet[2768]: I0127 12:53:39.699452 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/786b358a-1c4e-4eaf-9a74-ad018c309e7a-xtables-lock\") pod \"kube-proxy-dcr5j\" (UID: \"786b358a-1c4e-4eaf-9a74-ad018c309e7a\") " pod="kube-system/kube-proxy-dcr5j" Jan 27 12:53:39.699955 kubelet[2768]: I0127 12:53:39.699464 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/786b358a-1c4e-4eaf-9a74-ad018c309e7a-lib-modules\") pod \"kube-proxy-dcr5j\" (UID: \"786b358a-1c4e-4eaf-9a74-ad018c309e7a\") " pod="kube-system/kube-proxy-dcr5j" Jan 27 12:53:39.712108 systemd[1]: Created slice kubepods-besteffort-podbbba5774_e7ca_4460_a9cc_8bcc0a99a0e5.slice - libcontainer container kubepods-besteffort-podbbba5774_e7ca_4460_a9cc_8bcc0a99a0e5.slice. Jan 27 12:53:39.799847 kubelet[2768]: I0127 12:53:39.799646 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csm9t\" (UniqueName: \"kubernetes.io/projected/bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5-kube-api-access-csm9t\") pod \"tigera-operator-65cdcdfd6d-4pkwl\" (UID: \"bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4pkwl" Jan 27 12:53:39.799847 kubelet[2768]: I0127 12:53:39.799832 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4pkwl\" (UID: \"bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4pkwl" Jan 27 12:53:39.864419 kubelet[2768]: E0127 12:53:39.864253 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:39.866413 containerd[1598]: time="2026-01-27T12:53:39.866205359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcr5j,Uid:786b358a-1c4e-4eaf-9a74-ad018c309e7a,Namespace:kube-system,Attempt:0,}" Jan 27 12:53:39.923280 containerd[1598]: time="2026-01-27T12:53:39.923074585Z" level=info msg="connecting to shim 3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d" address="unix:///run/containerd/s/56d3edd5ec6c165979f69cc9b24188336cd81c45940ddd6fc53c50afb3976b7f" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:39.983168 systemd[1]: Started cri-containerd-3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d.scope - libcontainer container 
3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d. Jan 27 12:53:39.999000 audit: BPF prog-id=133 op=LOAD Jan 27 12:53:40.005111 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 27 12:53:40.005195 kernel: audit: type=1334 audit(1769518419.999:433): prog-id=133 op=LOAD Jan 27 12:53:40.000000 audit: BPF prog-id=134 op=LOAD Jan 27 12:53:40.007966 kernel: audit: type=1334 audit(1769518420.000:434): prog-id=134 op=LOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.018951 kernel: audit: type=1300 audit(1769518420.000:434): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.031171 containerd[1598]: time="2026-01-27T12:53:40.024250587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4pkwl,Uid:bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5,Namespace:tigera-operator,Attempt:0,}" Jan 27 12:53:40.031366 kernel: audit: type=1327 audit(1769518420.000:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=134 op=UNLOAD Jan 27 12:53:40.034939 kernel: audit: type=1334 audit(1769518420.000:435): prog-id=134 op=UNLOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.046384 kernel: audit: type=1300 audit(1769518420.000:435): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.057968 kernel: audit: type=1327 audit(1769518420.000:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=135 op=LOAD Jan 27 12:53:40.060802 
kernel: audit: type=1334 audit(1769518420.000:436): prog-id=135 op=LOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.064266 containerd[1598]: time="2026-01-27T12:53:40.064178081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcr5j,Uid:786b358a-1c4e-4eaf-9a74-ad018c309e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d\"" Jan 27 12:53:40.066310 kubelet[2768]: E0127 12:53:40.066265 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:40.073156 kernel: audit: type=1300 audit(1769518420.000:436): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.073236 kernel: audit: type=1327 audit(1769518420.000:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=136 op=LOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=136 op=UNLOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=135 op=UNLOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.000000 audit: BPF prog-id=137 op=LOAD Jan 27 12:53:40.000000 audit[2845]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2833 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333363261653131613931356362316662666539633338636137653835 Jan 27 12:53:40.084770 containerd[1598]: time="2026-01-27T12:53:40.084693155Z" level=info msg="connecting to shim 79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0" address="unix:///run/containerd/s/7aea3166aafa61f71561ea8a5445e64246d704719e666fceb5eb2c4a363915b2" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:40.108141 containerd[1598]: time="2026-01-27T12:53:40.108062467Z" level=info msg="CreateContainer within sandbox \"3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 27 12:53:40.122199 systemd[1]: Started cri-containerd-79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0.scope - libcontainer container 79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0. 
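The audit PROCTITLE fields above (and in the iptables/ip6tables records further down) hex-encode the command line that triggered the event, with NUL bytes separating the argv elements; longer command lines are cut off at the kernel's fixed proctitle length, which is why several of them end mid-token. A small decoder sketch (illustrative, not part of auditd or Flatcar):

```python
# Decode an audit proctitle= value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv if arg)

# Leading portion of one of the runc records above (shortened here for readability):
sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F"
          "6B38732E696F002D2D6C6F67")
print(decode_proctitle(sample))
# -> runc --root /run/containerd/runc/k8s.io --log
```

Applied to the full values, the runc records decode to the shim-issued runc invocations for the sandbox and container IDs seen above, and the xtables records decode to the kube-proxy chain setup, e.g. iptables -w 5 -N KUBE-PROXY-CANARY -t mangle.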
Jan 27 12:53:40.123669 containerd[1598]: time="2026-01-27T12:53:40.123317331Z" level=info msg="Container 41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:40.140000 containerd[1598]: time="2026-01-27T12:53:40.139888558Z" level=info msg="CreateContainer within sandbox \"3362ae11a915cb1fbfe9c38ca7e851f2ddc4645086b2f2c8a26bd1871b78f50d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4\"" Jan 27 12:53:40.139000 audit: BPF prog-id=138 op=LOAD Jan 27 12:53:40.140964 containerd[1598]: time="2026-01-27T12:53:40.140822646Z" level=info msg="StartContainer for \"41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4\"" Jan 27 12:53:40.140000 audit: BPF prog-id=139 op=LOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF prog-id=139 op=UNLOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF prog-id=140 op=LOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF prog-id=141 op=LOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF 
prog-id=141 op=UNLOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF prog-id=140 op=UNLOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.140000 audit: BPF prog-id=142 op=LOAD Jan 27 12:53:40.140000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2880 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653537376463346230646362323466306131346562343332643266 Jan 27 12:53:40.143184 containerd[1598]: time="2026-01-27T12:53:40.142872009Z" level=info msg="connecting to shim 41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4" address="unix:///run/containerd/s/56d3edd5ec6c165979f69cc9b24188336cd81c45940ddd6fc53c50afb3976b7f" protocol=ttrpc version=3 Jan 27 12:53:40.173309 systemd[1]: Started cri-containerd-41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4.scope - libcontainer container 41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4. 
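In the audit SYSCALL records interleaved through this startup, arch=c000003e is x86_64, where syscall 321 is bpf(2) and syscall 3 is close(2): each "BPF prog-id=... op=LOAD" / "op=UNLOAD" pair is runc loading a short-lived eBPF program while creating the container (typically feature probes plus the cgroup v2 device filter it attaches) and then releasing the program descriptor. The later xtables-nft records use syscall 46, sendmsg(2), to push nftables rules over netlink. A small annotator sketch (illustrative only):

```python
# Annotate audit SYSCALL records with the syscall name.
# Numbers are for arch=c000003e (x86_64), the arch shown in every record here.
SYSCALL_NAMES = {
    3: "close",     # releasing a BPF program fd; pairs with the "op=UNLOAD" records
    46: "sendmsg",  # netlink messages sent by iptables/ip6tables (xtables-nft-multi)
    321: "bpf",     # BPF program loads by runc; pairs with the "op=LOAD" records
}

def annotate(record: str) -> str:
    for field in record.split():
        if field.startswith("syscall="):
            number = int(field.split("=", 1)[1])
            return f"{record}  # {SYSCALL_NAMES.get(number, 'unknown')}"
    return record

print(annotate("audit[2845]: SYSCALL arch=c000003e syscall=321 success=yes exit=20"))
```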
Jan 27 12:53:40.189189 containerd[1598]: time="2026-01-27T12:53:40.189131058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4pkwl,Uid:bbba5774-e7ca-4460-a9cc-8bcc0a99a0e5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0\"" Jan 27 12:53:40.193841 containerd[1598]: time="2026-01-27T12:53:40.193544764Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 27 12:53:40.256000 audit: BPF prog-id=143 op=LOAD Jan 27 12:53:40.256000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2833 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431633034313662333237333133623562346432663035393936353031 Jan 27 12:53:40.257000 audit: BPF prog-id=144 op=LOAD Jan 27 12:53:40.257000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2833 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431633034313662333237333133623562346432663035393936353031 Jan 27 12:53:40.257000 audit: BPF prog-id=144 op=UNLOAD Jan 27 12:53:40.257000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431633034313662333237333133623562346432663035393936353031 Jan 27 12:53:40.257000 audit: BPF prog-id=143 op=UNLOAD Jan 27 12:53:40.257000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2833 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431633034313662333237333133623562346432663035393936353031 Jan 27 12:53:40.257000 audit: BPF prog-id=145 op=LOAD Jan 27 12:53:40.257000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2833 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431633034313662333237333133623562346432663035393936353031 Jan 27 12:53:40.290358 containerd[1598]: time="2026-01-27T12:53:40.290310091Z" level=info msg="StartContainer for \"41c0416b327313b5b4d2f059965011c30b18f8a0c6a0ee1261b5661e6f4cf9e4\" returns successfully" Jan 27 12:53:40.597000 audit[2984]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.597000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5fee5200 a2=0 a3=7ffe5fee51ec items=0 ppid=2925 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 12:53:40.601000 audit[2985]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.601000 audit[2985]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda436f5b0 a2=0 a3=7ffda436f59c items=0 ppid=2925 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 12:53:40.610000 audit[2989]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.610000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1c8c5ef0 a2=0 a3=7ffd1c8c5edc items=0 ppid=2925 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 12:53:40.613000 audit[2990]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.613000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc30b5ded0 a2=0 a3=7ffc30b5debc items=0 ppid=2925 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 12:53:40.616000 audit[2992]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.616000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff63a7ca70 a2=0 a3=7fff63a7ca5c items=0 ppid=2925 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 12:53:40.617000 audit[2993]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.617000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff99fab550 a2=0 a3=7fff99fab53c items=0 ppid=2925 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 12:53:40.710000 audit[2994]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.710000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd141ef620 a2=0 a3=7ffd141ef60c items=0 ppid=2925 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 12:53:40.717000 audit[2996]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.717000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdc6482100 a2=0 a3=7ffdc64820ec items=0 ppid=2925 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 27 12:53:40.726000 audit[2999]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.726000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffceb593e80 a2=0 a3=7ffceb593e6c items=0 ppid=2925 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 27 12:53:40.729000 audit[3000]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.729000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffe934862a0 a2=0 a3=7ffe9348628c items=0 ppid=2925 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 12:53:40.735000 audit[3002]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.735000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8c3bb720 a2=0 a3=7ffe8c3bb70c items=0 ppid=2925 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 12:53:40.738000 audit[3003]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.738000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd915f20 a2=0 a3=7ffdfd915f0c items=0 ppid=2925 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 12:53:40.745000 audit[3005]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.745000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff093bb620 a2=0 a3=7fff093bb60c items=0 ppid=2925 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.754000 audit[3008]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.754000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff9679f960 a2=0 a3=7fff9679f94c items=0 ppid=2925 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.757000 audit[3009]: NETFILTER_CFG table=filter:68 family=2 
entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.757000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef43eaaf0 a2=0 a3=7ffef43eaadc items=0 ppid=2925 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 12:53:40.763000 audit[3011]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.763000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda8f4ca80 a2=0 a3=7ffda8f4ca6c items=0 ppid=2925 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 12:53:40.766000 audit[3012]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.766000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3d924ac0 a2=0 a3=7ffd3d924aac items=0 ppid=2925 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 12:53:40.772000 audit[3014]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.772000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2a908f60 a2=0 a3=7ffd2a908f4c items=0 ppid=2925 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 27 12:53:40.780000 audit[3017]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.780000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffeb7cce0 a2=0 a3=7ffffeb7cccc items=0 ppid=2925 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.780000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 27 12:53:40.789000 audit[3020]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.789000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbaa416e0 a2=0 a3=7ffcbaa416cc items=0 ppid=2925 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 27 12:53:40.791000 audit[3021]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.791000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe49affe10 a2=0 a3=7ffe49affdfc items=0 ppid=2925 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 12:53:40.796000 audit[3023]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.796000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc663b0340 a2=0 a3=7ffc663b032c items=0 ppid=2925 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.803000 audit[3026]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.803000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1cb339c0 a2=0 a3=7fff1cb339ac items=0 ppid=2925 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.805000 audit[3027]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.805000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc82137540 a2=0 a3=7ffc8213752c 
items=0 ppid=2925 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 12:53:40.811000 audit[3029]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 12:53:40.811000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffbdcd6860 a2=0 a3=7fffbdcd684c items=0 ppid=2925 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 12:53:40.848000 audit[3035]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:40.848000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd14af2ed0 a2=0 a3=7ffd14af2ebc items=0 ppid=2925 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:40.859000 audit[3035]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:40.859000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd14af2ed0 a2=0 a3=7ffd14af2ebc items=0 ppid=2925 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:40.862000 audit[3040]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.862000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffbb578030 a2=0 a3=7fffbb57801c items=0 ppid=2925 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 12:53:40.868000 audit[3042]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.868000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffebd921460 a2=0 a3=7ffebd92144c items=0 ppid=2925 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 27 12:53:40.878000 audit[3045]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.878000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd44160480 a2=0 a3=7ffd4416046c items=0 ppid=2925 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 27 12:53:40.881000 audit[3046]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.881000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0b91b910 a2=0 a3=7fff0b91b8fc items=0 ppid=2925 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 12:53:40.890000 audit[3050]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.890000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc079d27d0 a2=0 a3=7ffc079d27bc items=0 ppid=2925 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 12:53:40.893000 audit[3053]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.893000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0c6f97d0 a2=0 a3=7fff0c6f97bc items=0 ppid=2925 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 12:53:40.901000 audit[3055]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.901000 
audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5c23b4c0 a2=0 a3=7ffc5c23b4ac items=0 ppid=2925 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.911000 audit[3058]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.911000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdc4c0d2b0 a2=0 a3=7ffdc4c0d29c items=0 ppid=2925 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.915000 audit[3059]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.915000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1041380 a2=0 a3=7ffdb104136c items=0 ppid=2925 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 12:53:40.922000 audit[3061]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.922000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd13ec31a0 a2=0 a3=7ffd13ec318c items=0 ppid=2925 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 12:53:40.927000 audit[3062]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.927000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d7c9ad0 a2=0 a3=7ffe1d7c9abc items=0 ppid=2925 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.927000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 12:53:40.935000 audit[3064]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.935000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffbd80ae50 a2=0 a3=7fffbd80ae3c items=0 ppid=2925 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 27 12:53:40.940979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133964453.mount: Deactivated successfully. Jan 27 12:53:40.946000 audit[3067]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.946000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7860f160 a2=0 a3=7ffc7860f14c items=0 ppid=2925 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 27 12:53:40.954000 audit[3070]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.954000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe74b9fd00 a2=0 a3=7ffe74b9fcec items=0 ppid=2925 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 27 12:53:40.956000 audit[3071]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.956000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff378744a0 a2=0 a3=7fff3787448c items=0 ppid=2925 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 12:53:40.963000 audit[3073]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.963000 
audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcb20cd350 a2=0 a3=7ffcb20cd33c items=0 ppid=2925 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.971000 audit[3076]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.971000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9a110130 a2=0 a3=7ffe9a11011c items=0 ppid=2925 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 12:53:40.974000 audit[3077]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.974000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd25cd0510 a2=0 a3=7ffd25cd04fc items=0 ppid=2925 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 12:53:40.979000 audit[3079]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.979000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcbdfc0ec0 a2=0 a3=7ffcbdfc0eac items=0 ppid=2925 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 12:53:40.986000 audit[3080]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.986000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefe6c1010 a2=0 a3=7ffefe6c0ffc items=0 ppid=2925 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 12:53:40.993000 audit[3082]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3082 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:40.993000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc81bc4ec0 a2=0 a3=7ffc81bc4eac items=0 ppid=2925 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:40.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 12:53:41.001000 audit[3085]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 12:53:41.001000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff0b3271f0 a2=0 a3=7fff0b3271dc items=0 ppid=2925 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:41.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 12:53:41.007000 audit[3087]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 12:53:41.007000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd47b46db0 a2=0 a3=7ffd47b46d9c items=0 ppid=2925 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:41.007000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:41.008000 audit[3087]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 12:53:41.008000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd47b46db0 a2=0 a3=7ffd47b46d9c items=0 ppid=2925 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:41.008000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:41.036174 kubelet[2768]: E0127 12:53:41.035790 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:42.734569 containerd[1598]: time="2026-01-27T12:53:42.734425200Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:42.735647 containerd[1598]: time="2026-01-27T12:53:42.735603773Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 27 12:53:42.737108 containerd[1598]: time="2026-01-27T12:53:42.737010020Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:42.743112 containerd[1598]: time="2026-01-27T12:53:42.742888461Z" 
level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:42.744687 containerd[1598]: time="2026-01-27T12:53:42.744540685Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.550964222s" Jan 27 12:53:42.744687 containerd[1598]: time="2026-01-27T12:53:42.744579056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 27 12:53:42.750678 containerd[1598]: time="2026-01-27T12:53:42.750445984Z" level=info msg="CreateContainer within sandbox \"79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 27 12:53:42.762775 containerd[1598]: time="2026-01-27T12:53:42.762428405Z" level=info msg="Container 3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:42.771495 containerd[1598]: time="2026-01-27T12:53:42.771316531Z" level=info msg="CreateContainer within sandbox \"79e577dc4b0dcb24f0a14eb432d2f7e5780d3eb6886ebf5b4ca87b982764f8d0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df\"" Jan 27 12:53:42.772879 containerd[1598]: time="2026-01-27T12:53:42.772380001Z" level=info msg="StartContainer for \"3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df\"" Jan 27 12:53:42.773618 containerd[1598]: time="2026-01-27T12:53:42.773567361Z" level=info msg="connecting to shim 3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df" address="unix:///run/containerd/s/7aea3166aafa61f71561ea8a5445e64246d704719e666fceb5eb2c4a363915b2" protocol=ttrpc version=3 Jan 27 12:53:42.803248 systemd[1]: Started cri-containerd-3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df.scope - libcontainer container 3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df. 
Jan 27 12:53:42.820000 audit: BPF prog-id=146 op=LOAD Jan 27 12:53:42.820000 audit: BPF prog-id=147 op=LOAD Jan 27 12:53:42.820000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=147 op=UNLOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=148 op=LOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=149 op=LOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=149 op=UNLOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=148 op=UNLOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.821000 audit: BPF prog-id=150 op=LOAD Jan 27 12:53:42.821000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2880 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:42.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613133313133366634363238363934353130306465343032303263 Jan 27 12:53:42.874287 containerd[1598]: time="2026-01-27T12:53:42.874168808Z" level=info msg="StartContainer for \"3da131136f46286945100de40202c53d7e18178d7aae1e017f1b18c6665a07df\" returns successfully" Jan 27 12:53:43.056089 kubelet[2768]: I0127 12:53:43.054553 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dcr5j" podStartSLOduration=4.054536729 podStartE2EDuration="4.054536729s" podCreationTimestamp="2026-01-27 12:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:53:41.050677794 +0000 UTC m=+8.224588225" watchObservedRunningTime="2026-01-27 12:53:43.054536729 +0000 UTC m=+10.228447160" Jan 27 12:53:43.491987 kubelet[2768]: E0127 12:53:43.491739 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:43.508454 kubelet[2768]: I0127 12:53:43.508327 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4pkwl" podStartSLOduration=1.9543601339999999 podStartE2EDuration="4.508313082s" podCreationTimestamp="2026-01-27 12:53:39 +0000 UTC" firstStartedPulling="2026-01-27 12:53:40.191833529 +0000 UTC m=+7.365743960" lastFinishedPulling="2026-01-27 12:53:42.745786477 +0000 UTC m=+9.919696908" observedRunningTime="2026-01-27 12:53:43.055625673 +0000 UTC m=+10.229536103" watchObservedRunningTime="2026-01-27 12:53:43.508313082 +0000 UTC m=+10.682223513" Jan 27 12:53:44.779243 kubelet[2768]: E0127 12:53:44.778982 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:45.049284 kubelet[2768]: E0127 12:53:45.049149 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:47.190582 kubelet[2768]: E0127 12:53:47.190430 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:48.057157 kubelet[2768]: E0127 12:53:48.057010 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:48.540000 audit[1820]: USER_END pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:48.541069 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 27 12:53:48.561223 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 27 12:53:48.561363 kernel: audit: type=1106 audit(1769518428.540:513): pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:48.550227 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 27 12:53:48.561827 sshd[1819]: Connection closed by 10.0.0.1 port 53980 Jan 27 12:53:48.540000 audit[1820]: CRED_DISP pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 12:53:48.574535 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:53980.service: Deactivated successfully. Jan 27 12:53:48.580608 systemd[1]: session-8.scope: Deactivated successfully. Jan 27 12:53:48.582525 systemd[1]: session-8.scope: Consumed 6.664s CPU time, 221.5M memory peak. Jan 27 12:53:48.569000 audit[1815]: USER_END pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:48.588868 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Jan 27 12:53:48.593870 systemd-logind[1575]: Removed session 8. Jan 27 12:53:48.606253 kernel: audit: type=1104 audit(1769518428.540:514): pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 12:53:48.606345 kernel: audit: type=1106 audit(1769518428.569:515): pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:48.569000 audit[1815]: CRED_DISP pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:48.619158 kernel: audit: type=1104 audit(1769518428.569:516): pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:53:48.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:53980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:48.632221 kernel: audit: type=1131 audit(1769518428.574:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:53980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:53:48.992000 audit[3182]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:49.005731 kernel: audit: type=1325 audit(1769518428.992:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:48.992000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2e5dc4f0 a2=0 a3=7fff2e5dc4dc items=0 ppid=2925 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:48.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:49.029320 kernel: audit: type=1300 audit(1769518428.992:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2e5dc4f0 a2=0 a3=7fff2e5dc4dc items=0 ppid=2925 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:49.029391 kernel: audit: type=1327 audit(1769518428.992:518): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:49.030662 kernel: audit: type=1325 audit(1769518429.006:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:49.006000 audit[3182]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:49.006000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2e5dc4f0 a2=0 a3=0 items=0 ppid=2925 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:49.048955 kernel: audit: type=1300 audit(1769518429.006:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2e5dc4f0 a2=0 a3=0 items=0 ppid=2925 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:49.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:50.052000 audit[3184]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:50.052000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc4fdf6960 a2=0 a3=7ffc4fdf694c items=0 ppid=2925 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:50.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:50.058000 audit[3184]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:50.058000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc4fdf6960 a2=0 a3=0 items=0 ppid=2925 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:50.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:51.285945 update_engine[1579]: I20260127 12:53:51.285777 1579 update_attempter.cc:509] Updating boot flags... 
Jan 27 12:53:52.021000 audit[3205]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:52.021000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd334d5520 a2=0 a3=7ffd334d550c items=0 ppid=2925 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:52.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:52.026000 audit[3205]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:52.026000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd334d5520 a2=0 a3=0 items=0 ppid=2925 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:52.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:53.045000 audit[3207]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:53.045000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc5c578f80 a2=0 a3=7ffc5c578f6c items=0 ppid=2925 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:53.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:53.052000 audit[3207]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:53.052000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5c578f80 a2=0 a3=0 items=0 ppid=2925 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:53.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:53.804449 systemd[1]: Created slice kubepods-besteffort-pod6084f594_e856_4f2b_857b_18678aae5874.slice - libcontainer container kubepods-besteffort-pod6084f594_e856_4f2b_857b_18678aae5874.slice. 
Jan 27 12:53:53.814222 kubelet[2768]: I0127 12:53:53.814138 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6084f594-e856-4f2b-857b-18678aae5874-typha-certs\") pod \"calico-typha-5c59b88c84-pn4cl\" (UID: \"6084f594-e856-4f2b-857b-18678aae5874\") " pod="calico-system/calico-typha-5c59b88c84-pn4cl" Jan 27 12:53:53.814636 kubelet[2768]: I0127 12:53:53.814225 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6084f594-e856-4f2b-857b-18678aae5874-tigera-ca-bundle\") pod \"calico-typha-5c59b88c84-pn4cl\" (UID: \"6084f594-e856-4f2b-857b-18678aae5874\") " pod="calico-system/calico-typha-5c59b88c84-pn4cl" Jan 27 12:53:53.814636 kubelet[2768]: I0127 12:53:53.814249 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdl89\" (UniqueName: \"kubernetes.io/projected/6084f594-e856-4f2b-857b-18678aae5874-kube-api-access-gdl89\") pod \"calico-typha-5c59b88c84-pn4cl\" (UID: \"6084f594-e856-4f2b-857b-18678aae5874\") " pod="calico-system/calico-typha-5c59b88c84-pn4cl" Jan 27 12:53:54.027131 systemd[1]: Created slice kubepods-besteffort-pod110044ff_8f24_4c1b_88ab_d1d21f4349ba.slice - libcontainer container kubepods-besteffort-pod110044ff_8f24_4c1b_88ab_d1d21f4349ba.slice. Jan 27 12:53:54.080000 audit[3211]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:54.086140 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 27 12:53:54.086210 kernel: audit: type=1325 audit(1769518434.080:526): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:54.080000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda91835a0 a2=0 a3=7ffda918358c items=0 ppid=2925 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.111342 kubelet[2768]: E0127 12:53:54.111229 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:54.112822 containerd[1598]: time="2026-01-27T12:53:54.112523516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c59b88c84-pn4cl,Uid:6084f594-e856-4f2b-857b-18678aae5874,Namespace:calico-system,Attempt:0,}" Jan 27 12:53:54.114412 kernel: audit: type=1300 audit(1769518434.080:526): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda91835a0 a2=0 a3=7ffda918358c items=0 ppid=2925 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.114455 kernel: audit: type=1327 audit(1769518434.080:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:54.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:54.119268 kubelet[2768]: I0127 12:53:54.119236 2768 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-cni-bin-dir\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.119760 kubelet[2768]: I0127 12:53:54.119715 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-flexvol-driver-host\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120086 kubelet[2768]: I0127 12:53:54.119988 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-cni-log-dir\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120603 kubelet[2768]: I0127 12:53:54.120361 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/110044ff-8f24-4c1b-88ab-d1d21f4349ba-node-certs\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120603 kubelet[2768]: I0127 12:53:54.120496 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-xtables-lock\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120963 kubelet[2768]: I0127 12:53:54.120781 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-var-run-calico\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120963 kubelet[2768]: I0127 12:53:54.120816 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/110044ff-8f24-4c1b-88ab-d1d21f4349ba-tigera-ca-bundle\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120963 kubelet[2768]: I0127 12:53:54.120840 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-lib-modules\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.120963 kubelet[2768]: I0127 12:53:54.120861 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-policysync\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.121103 kubelet[2768]: I0127 12:53:54.120988 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-var-lib-calico\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.121103 kubelet[2768]: I0127 12:53:54.121026 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/110044ff-8f24-4c1b-88ab-d1d21f4349ba-cni-net-dir\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.121103 kubelet[2768]: I0127 12:53:54.121047 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgxj\" (UniqueName: \"kubernetes.io/projected/110044ff-8f24-4c1b-88ab-d1d21f4349ba-kube-api-access-6mgxj\") pod \"calico-node-8lxcz\" (UID: \"110044ff-8f24-4c1b-88ab-d1d21f4349ba\") " pod="calico-system/calico-node-8lxcz" Jan 27 12:53:54.123000 audit[3211]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:54.123000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda91835a0 a2=0 a3=0 items=0 ppid=2925 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.148970 kernel: audit: type=1325 audit(1769518434.123:527): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:53:54.149013 kernel: audit: type=1300 audit(1769518434.123:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda91835a0 a2=0 a3=0 items=0 ppid=2925 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.154208 kernel: audit: type=1327 audit(1769518434.123:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:54.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:53:54.155738 containerd[1598]: time="2026-01-27T12:53:54.155543049Z" level=info msg="connecting to shim ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b" address="unix:///run/containerd/s/9a1dafa4ca19c18c7040af972813049470f314f111522580f75e52718bb16d6e" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:54.226457 systemd[1]: Started cri-containerd-ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b.scope - libcontainer container ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b. 
Jan 27 12:53:54.231659 kubelet[2768]: E0127 12:53:54.231500 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.231659 kubelet[2768]: W0127 12:53:54.231569 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.231659 kubelet[2768]: E0127 12:53:54.231601 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.235201 kubelet[2768]: E0127 12:53:54.235112 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:53:54.257802 kubelet[2768]: E0127 12:53:54.257743 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.257802 kubelet[2768]: W0127 12:53:54.257767 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.257802 kubelet[2768]: E0127 12:53:54.257788 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.269760 kubelet[2768]: E0127 12:53:54.269620 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.269760 kubelet[2768]: W0127 12:53:54.269644 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.269760 kubelet[2768]: E0127 12:53:54.269668 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.273000 audit: BPF prog-id=151 op=LOAD Jan 27 12:53:54.275000 audit: BPF prog-id=152 op=LOAD Jan 27 12:53:54.285409 kernel: audit: type=1334 audit(1769518434.273:528): prog-id=151 op=LOAD Jan 27 12:53:54.285508 kernel: audit: type=1334 audit(1769518434.275:529): prog-id=152 op=LOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.306084 kernel: audit: type=1300 audit(1769518434.275:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.306161 kernel: audit: type=1327 audit(1769518434.275:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=152 op=UNLOAD Jan 27 12:53:54.325759 kubelet[2768]: E0127 12:53:54.325615 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.325759 kubelet[2768]: W0127 12:53:54.325639 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.325759 kubelet[2768]: E0127 12:53:54.325662 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=153 op=LOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=154 op=LOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=154 op=UNLOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=153 op=UNLOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.275000 audit: BPF prog-id=155 op=LOAD Jan 27 12:53:54.275000 audit[3232]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 
ppid=3220 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633666316432366632306336343230383365646635366232623932 Jan 27 12:53:54.328190 kubelet[2768]: E0127 12:53:54.328066 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.328190 kubelet[2768]: W0127 12:53:54.328103 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.328190 kubelet[2768]: E0127 12:53:54.328120 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.329717 kubelet[2768]: E0127 12:53:54.329561 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.329717 kubelet[2768]: W0127 12:53:54.329599 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.329717 kubelet[2768]: E0127 12:53:54.329612 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.330985 kubelet[2768]: E0127 12:53:54.330844 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.330985 kubelet[2768]: W0127 12:53:54.330882 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.330985 kubelet[2768]: E0127 12:53:54.330947 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.332398 kubelet[2768]: E0127 12:53:54.332329 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.332398 kubelet[2768]: W0127 12:53:54.332364 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.332398 kubelet[2768]: E0127 12:53:54.332374 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.334029 kubelet[2768]: E0127 12:53:54.333997 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.334029 kubelet[2768]: W0127 12:53:54.334007 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.334029 kubelet[2768]: E0127 12:53:54.334017 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.335083 kubelet[2768]: E0127 12:53:54.335012 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.335083 kubelet[2768]: W0127 12:53:54.335050 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.335083 kubelet[2768]: E0127 12:53:54.335060 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.337164 kubelet[2768]: E0127 12:53:54.337027 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.337164 kubelet[2768]: W0127 12:53:54.337072 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.337164 kubelet[2768]: E0127 12:53:54.337084 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.338029 kubelet[2768]: E0127 12:53:54.337397 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.338029 kubelet[2768]: W0127 12:53:54.337408 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.338029 kubelet[2768]: E0127 12:53:54.337417 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.339008 kubelet[2768]: E0127 12:53:54.338984 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.339008 kubelet[2768]: W0127 12:53:54.338997 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.339008 kubelet[2768]: E0127 12:53:54.339006 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.339429 kubelet[2768]: E0127 12:53:54.339327 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.339429 kubelet[2768]: W0127 12:53:54.339343 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.339429 kubelet[2768]: E0127 12:53:54.339353 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.340569 kubelet[2768]: E0127 12:53:54.339983 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.340569 kubelet[2768]: W0127 12:53:54.339995 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.340569 kubelet[2768]: E0127 12:53:54.340004 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.341865 kubelet[2768]: E0127 12:53:54.341755 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.341865 kubelet[2768]: W0127 12:53:54.341769 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.341865 kubelet[2768]: E0127 12:53:54.341780 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.342775 kubelet[2768]: E0127 12:53:54.342182 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.342775 kubelet[2768]: W0127 12:53:54.342193 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.342775 kubelet[2768]: E0127 12:53:54.342206 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.343443 kubelet[2768]: E0127 12:53:54.343301 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.343443 kubelet[2768]: W0127 12:53:54.343315 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.343443 kubelet[2768]: E0127 12:53:54.343326 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.344150 kubelet[2768]: E0127 12:53:54.344102 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.344150 kubelet[2768]: W0127 12:53:54.344143 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.344534 kubelet[2768]: E0127 12:53:54.344153 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.344534 kubelet[2768]: I0127 12:53:54.344482 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6af69036-827e-49bb-8e7c-3940b856830f-kubelet-dir\") pod \"csi-node-driver-5vwvj\" (UID: \"6af69036-827e-49bb-8e7c-3940b856830f\") " pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:53:54.345012 kubelet[2768]: E0127 12:53:54.344856 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.345012 kubelet[2768]: W0127 12:53:54.344987 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.345012 kubelet[2768]: E0127 12:53:54.344999 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.347239 kubelet[2768]: I0127 12:53:54.347189 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6af69036-827e-49bb-8e7c-3940b856830f-registration-dir\") pod \"csi-node-driver-5vwvj\" (UID: \"6af69036-827e-49bb-8e7c-3940b856830f\") " pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:53:54.347793 kubelet[2768]: E0127 12:53:54.347719 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.347793 kubelet[2768]: W0127 12:53:54.347758 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.347793 kubelet[2768]: E0127 12:53:54.347769 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.348617 kubelet[2768]: E0127 12:53:54.348543 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.348617 kubelet[2768]: W0127 12:53:54.348582 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.348617 kubelet[2768]: E0127 12:53:54.348592 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.351005 kubelet[2768]: E0127 12:53:54.350141 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.351005 kubelet[2768]: W0127 12:53:54.350157 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.351005 kubelet[2768]: E0127 12:53:54.350170 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.351859 kubelet[2768]: E0127 12:53:54.351794 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.352184 kubelet[2768]: W0127 12:53:54.352111 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.353010 kubelet[2768]: E0127 12:53:54.352982 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.354026 kubelet[2768]: I0127 12:53:54.353971 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6af69036-827e-49bb-8e7c-3940b856830f-socket-dir\") pod \"csi-node-driver-5vwvj\" (UID: \"6af69036-827e-49bb-8e7c-3940b856830f\") " pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:53:54.354176 kubelet[2768]: E0127 12:53:54.354121 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:54.356179 kubelet[2768]: E0127 12:53:54.356147 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.356179 kubelet[2768]: W0127 12:53:54.356159 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.356179 kubelet[2768]: E0127 12:53:54.356171 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.358199 kubelet[2768]: E0127 12:53:54.358102 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.358199 kubelet[2768]: W0127 12:53:54.358145 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.358199 kubelet[2768]: E0127 12:53:54.358160 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.358962 kubelet[2768]: E0127 12:53:54.358826 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.358962 kubelet[2768]: W0127 12:53:54.358840 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.358962 kubelet[2768]: E0127 12:53:54.358851 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.359594 containerd[1598]: time="2026-01-27T12:53:54.359463510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lxcz,Uid:110044ff-8f24-4c1b-88ab-d1d21f4349ba,Namespace:calico-system,Attempt:0,}" Jan 27 12:53:54.360228 kubelet[2768]: E0127 12:53:54.360084 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.360228 kubelet[2768]: W0127 12:53:54.360137 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.360228 kubelet[2768]: E0127 12:53:54.360151 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.364176 kubelet[2768]: E0127 12:53:54.364148 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.364588 kubelet[2768]: W0127 12:53:54.364380 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.364588 kubelet[2768]: E0127 12:53:54.364400 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.366043 kubelet[2768]: E0127 12:53:54.366025 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.366155 kubelet[2768]: W0127 12:53:54.366136 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.366249 kubelet[2768]: E0127 12:53:54.366232 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.366801 kubelet[2768]: E0127 12:53:54.366594 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.366801 kubelet[2768]: W0127 12:53:54.366610 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.366801 kubelet[2768]: E0127 12:53:54.366632 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.367237 kubelet[2768]: E0127 12:53:54.367221 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.367329 kubelet[2768]: W0127 12:53:54.367313 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.367401 kubelet[2768]: E0127 12:53:54.367385 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.423994 containerd[1598]: time="2026-01-27T12:53:54.423752330Z" level=info msg="connecting to shim c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd" address="unix:///run/containerd/s/08035aa0bf85a400f1a0a21e3f7f998f097e96b6e69a66983a596e5d4d20918c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:53:54.438518 containerd[1598]: time="2026-01-27T12:53:54.438488407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c59b88c84-pn4cl,Uid:6084f594-e856-4f2b-857b-18678aae5874,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b\"" Jan 27 12:53:54.440476 kubelet[2768]: E0127 12:53:54.439884 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:54.442671 containerd[1598]: time="2026-01-27T12:53:54.442649366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 27 12:53:54.468218 systemd[1]: Started cri-containerd-c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd.scope - libcontainer container c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd. Jan 27 12:53:54.469243 kubelet[2768]: E0127 12:53:54.469163 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.469517 kubelet[2768]: W0127 12:53:54.469223 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.469517 kubelet[2768]: E0127 12:53:54.469368 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.470829 kubelet[2768]: E0127 12:53:54.470636 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.470829 kubelet[2768]: W0127 12:53:54.470651 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.471102 kubelet[2768]: E0127 12:53:54.471001 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.471962 kubelet[2768]: E0127 12:53:54.471794 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.471962 kubelet[2768]: W0127 12:53:54.471828 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.471962 kubelet[2768]: E0127 12:53:54.471838 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.473162 kubelet[2768]: E0127 12:53:54.473015 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.473383 kubelet[2768]: W0127 12:53:54.473354 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.473383 kubelet[2768]: E0127 12:53:54.473370 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.474418 kubelet[2768]: I0127 12:53:54.474282 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6af69036-827e-49bb-8e7c-3940b856830f-varrun\") pod \"csi-node-driver-5vwvj\" (UID: \"6af69036-827e-49bb-8e7c-3940b856830f\") " pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:53:54.475788 kubelet[2768]: E0127 12:53:54.475219 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.476483 kubelet[2768]: W0127 12:53:54.476363 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.476992 kubelet[2768]: E0127 12:53:54.476965 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.479257 kubelet[2768]: E0127 12:53:54.479223 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.479580 kubelet[2768]: W0127 12:53:54.479254 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.479610 kubelet[2768]: E0127 12:53:54.479583 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.482147 kubelet[2768]: E0127 12:53:54.482049 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.482147 kubelet[2768]: W0127 12:53:54.482062 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.482147 kubelet[2768]: E0127 12:53:54.482071 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.482336 kubelet[2768]: I0127 12:53:54.482314 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c688\" (UniqueName: \"kubernetes.io/projected/6af69036-827e-49bb-8e7c-3940b856830f-kube-api-access-4c688\") pod \"csi-node-driver-5vwvj\" (UID: \"6af69036-827e-49bb-8e7c-3940b856830f\") " pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:53:54.484282 kubelet[2768]: E0127 12:53:54.483747 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.484282 kubelet[2768]: W0127 12:53:54.483759 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.484282 kubelet[2768]: E0127 12:53:54.483769 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.485088 kubelet[2768]: E0127 12:53:54.485053 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.485088 kubelet[2768]: W0127 12:53:54.485065 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.485088 kubelet[2768]: E0127 12:53:54.485076 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.486436 kubelet[2768]: E0127 12:53:54.486387 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.486502 kubelet[2768]: W0127 12:53:54.486489 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.486526 kubelet[2768]: E0127 12:53:54.486505 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.487740 kubelet[2768]: E0127 12:53:54.487664 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.487999 kubelet[2768]: W0127 12:53:54.487885 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.487999 kubelet[2768]: E0127 12:53:54.487997 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.488838 kubelet[2768]: E0127 12:53:54.488804 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.488838 kubelet[2768]: W0127 12:53:54.488833 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.489017 kubelet[2768]: E0127 12:53:54.488983 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.489814 kubelet[2768]: E0127 12:53:54.489714 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.489814 kubelet[2768]: W0127 12:53:54.489743 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.489814 kubelet[2768]: E0127 12:53:54.489807 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.490324 kubelet[2768]: E0127 12:53:54.490267 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.490324 kubelet[2768]: W0127 12:53:54.490306 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.490324 kubelet[2768]: E0127 12:53:54.490316 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.491283 kubelet[2768]: E0127 12:53:54.491227 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.491283 kubelet[2768]: W0127 12:53:54.491280 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.491348 kubelet[2768]: E0127 12:53:54.491290 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.491758 kubelet[2768]: E0127 12:53:54.491704 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.491758 kubelet[2768]: W0127 12:53:54.491735 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.491758 kubelet[2768]: E0127 12:53:54.491745 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.492328 kubelet[2768]: E0127 12:53:54.492274 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.492328 kubelet[2768]: W0127 12:53:54.492323 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.492388 kubelet[2768]: E0127 12:53:54.492338 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.492839 kubelet[2768]: E0127 12:53:54.492789 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.492885 kubelet[2768]: W0127 12:53:54.492873 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.492989 kubelet[2768]: E0127 12:53:54.492886 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.493537 kubelet[2768]: E0127 12:53:54.493482 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.493537 kubelet[2768]: W0127 12:53:54.493514 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.493537 kubelet[2768]: E0127 12:53:54.493524 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.494102 kubelet[2768]: E0127 12:53:54.494001 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.494138 kubelet[2768]: W0127 12:53:54.494129 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.494159 kubelet[2768]: E0127 12:53:54.494143 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.494834 kubelet[2768]: E0127 12:53:54.494784 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.494834 kubelet[2768]: W0127 12:53:54.494814 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.494834 kubelet[2768]: E0127 12:53:54.494824 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.497000 audit: BPF prog-id=156 op=LOAD Jan 27 12:53:54.498000 audit: BPF prog-id=157 op=LOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=157 op=UNLOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=158 op=LOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=159 op=LOAD Jan 27 
12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=159 op=UNLOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=158 op=UNLOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.498000 audit: BPF prog-id=160 op=LOAD Jan 27 12:53:54.498000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3307 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337626636633966303062656530376362336565356339663235336436 Jan 27 12:53:54.526816 containerd[1598]: time="2026-01-27T12:53:54.526627181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lxcz,Uid:110044ff-8f24-4c1b-88ab-d1d21f4349ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\"" Jan 27 12:53:54.528435 kubelet[2768]: E0127 12:53:54.528344 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:54.592627 kubelet[2768]: E0127 12:53:54.591586 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.592627 kubelet[2768]: W0127 12:53:54.591633 2768 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.592627 kubelet[2768]: E0127 12:53:54.591651 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.593076 kubelet[2768]: E0127 12:53:54.593009 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.593076 kubelet[2768]: W0127 12:53:54.593063 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.593164 kubelet[2768]: E0127 12:53:54.593080 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.593792 kubelet[2768]: E0127 12:53:54.593563 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.593792 kubelet[2768]: W0127 12:53:54.593787 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.593858 kubelet[2768]: E0127 12:53:54.593804 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.594754 kubelet[2768]: E0127 12:53:54.594576 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.594754 kubelet[2768]: W0127 12:53:54.594621 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.594754 kubelet[2768]: E0127 12:53:54.594636 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.595651 kubelet[2768]: E0127 12:53:54.595226 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.595651 kubelet[2768]: W0127 12:53:54.595239 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.595651 kubelet[2768]: E0127 12:53:54.595251 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.596338 kubelet[2768]: E0127 12:53:54.596267 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.596338 kubelet[2768]: W0127 12:53:54.596303 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.596338 kubelet[2768]: E0127 12:53:54.596314 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.596809 kubelet[2768]: E0127 12:53:54.596763 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.596858 kubelet[2768]: W0127 12:53:54.596814 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.596858 kubelet[2768]: E0127 12:53:54.596829 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.597398 kubelet[2768]: E0127 12:53:54.597354 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.597456 kubelet[2768]: W0127 12:53:54.597400 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.597456 kubelet[2768]: E0127 12:53:54.597413 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.598018 kubelet[2768]: E0127 12:53:54.597950 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.598018 kubelet[2768]: W0127 12:53:54.597963 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.598018 kubelet[2768]: E0127 12:53:54.597974 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:54.599035 kubelet[2768]: E0127 12:53:54.598801 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.599035 kubelet[2768]: W0127 12:53:54.598813 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.599035 kubelet[2768]: E0127 12:53:54.598825 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:54.609287 kubelet[2768]: E0127 12:53:54.609219 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:54.609287 kubelet[2768]: W0127 12:53:54.609252 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:54.609287 kubelet[2768]: E0127 12:53:54.609263 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:55.590091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705336721.mount: Deactivated successfully. Jan 27 12:53:55.986234 kubelet[2768]: E0127 12:53:55.986074 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:53:56.293257 containerd[1598]: time="2026-01-27T12:53:56.292992347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.294182 containerd[1598]: time="2026-01-27T12:53:56.294141019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 27 12:53:56.296114 containerd[1598]: time="2026-01-27T12:53:56.295885986Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.298668 containerd[1598]: time="2026-01-27T12:53:56.298562952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.299538 containerd[1598]: time="2026-01-27T12:53:56.299501607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.856639536s" Jan 27 12:53:56.299538 containerd[1598]: time="2026-01-27T12:53:56.299531913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 27 12:53:56.302386 containerd[1598]: time="2026-01-27T12:53:56.302199624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 27 12:53:56.318253 containerd[1598]: time="2026-01-27T12:53:56.317766701Z" level=info msg="CreateContainer within sandbox \"ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 27 12:53:56.332626 containerd[1598]: time="2026-01-27T12:53:56.332596057Z" level=info msg="Container 3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:56.342517 containerd[1598]: 
time="2026-01-27T12:53:56.342404179Z" level=info msg="CreateContainer within sandbox \"ffc6f1d26f20c642083edf56b2b9213337df1b16ca3109ad991499ebf5bae77b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581\"" Jan 27 12:53:56.343495 containerd[1598]: time="2026-01-27T12:53:56.343363354Z" level=info msg="StartContainer for \"3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581\"" Jan 27 12:53:56.344822 containerd[1598]: time="2026-01-27T12:53:56.344739539Z" level=info msg="connecting to shim 3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581" address="unix:///run/containerd/s/9a1dafa4ca19c18c7040af972813049470f314f111522580f75e52718bb16d6e" protocol=ttrpc version=3 Jan 27 12:53:56.373504 systemd[1]: Started cri-containerd-3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581.scope - libcontainer container 3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581. Jan 27 12:53:56.399000 audit: BPF prog-id=161 op=LOAD Jan 27 12:53:56.400000 audit: BPF prog-id=162 op=LOAD Jan 27 12:53:56.400000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.400000 audit: BPF prog-id=162 op=UNLOAD Jan 27 12:53:56.400000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.401000 audit: BPF prog-id=163 op=LOAD Jan 27 12:53:56.401000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.401000 audit: BPF prog-id=164 op=LOAD Jan 27 12:53:56.401000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.401000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.402000 audit: BPF prog-id=164 op=UNLOAD Jan 27 12:53:56.402000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.402000 audit: BPF prog-id=163 op=UNLOAD Jan 27 12:53:56.402000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.402000 audit: BPF prog-id=165 op=LOAD Jan 27 12:53:56.402000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3220 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363239343537376238393964633630666162366238666537333764 Jan 27 12:53:56.468000 containerd[1598]: time="2026-01-27T12:53:56.467606985Z" level=info msg="StartContainer for \"3b6294577b899dc60fab6b8fe737d461364ce4784689a9fe586bc4f588c79581\" returns successfully" Jan 27 12:53:56.942096 containerd[1598]: time="2026-01-27T12:53:56.942056720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.943744 containerd[1598]: time="2026-01-27T12:53:56.943720043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 27 12:53:56.945080 containerd[1598]: time="2026-01-27T12:53:56.944996325Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.947570 containerd[1598]: time="2026-01-27T12:53:56.947505853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:53:56.948606 containerd[1598]: 
time="2026-01-27T12:53:56.948526429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 646.247468ms" Jan 27 12:53:56.948665 containerd[1598]: time="2026-01-27T12:53:56.948603232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 27 12:53:56.954284 containerd[1598]: time="2026-01-27T12:53:56.954242850Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 27 12:53:56.966330 containerd[1598]: time="2026-01-27T12:53:56.966230617Z" level=info msg="Container 82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:53:56.977525 containerd[1598]: time="2026-01-27T12:53:56.977424462Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0\"" Jan 27 12:53:56.980158 containerd[1598]: time="2026-01-27T12:53:56.979765766Z" level=info msg="StartContainer for \"82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0\"" Jan 27 12:53:56.982154 containerd[1598]: time="2026-01-27T12:53:56.981888525Z" level=info msg="connecting to shim 82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0" address="unix:///run/containerd/s/08035aa0bf85a400f1a0a21e3f7f998f097e96b6e69a66983a596e5d4d20918c" protocol=ttrpc version=3 Jan 27 12:53:57.024344 systemd[1]: Started cri-containerd-82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0.scope - libcontainer container 82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0. 
Jan 27 12:53:57.094852 kubelet[2768]: E0127 12:53:57.094599 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:57.109000 audit: BPF prog-id=166 op=LOAD Jan 27 12:53:57.109000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3307 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:57.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832616262343137343537356164396365316138626565366231333630 Jan 27 12:53:57.109000 audit: BPF prog-id=167 op=LOAD Jan 27 12:53:57.109000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3307 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:57.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832616262343137343537356164396365316138626565366231333630 Jan 27 12:53:57.109000 audit: BPF prog-id=167 op=UNLOAD Jan 27 12:53:57.109000 audit[3431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:57.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832616262343137343537356164396365316138626565366231333630 Jan 27 12:53:57.109000 audit: BPF prog-id=166 op=UNLOAD Jan 27 12:53:57.109000 audit[3431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:57.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832616262343137343537356164396365316138626565366231333630 Jan 27 12:53:57.109000 audit: BPF prog-id=168 op=LOAD Jan 27 12:53:57.109000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3307 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:53:57.109000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832616262343137343537356164396365316138626565366231333630 Jan 27 12:53:57.157724 containerd[1598]: time="2026-01-27T12:53:57.157609491Z" level=info msg="StartContainer for \"82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0\" returns successfully" Jan 27 12:53:57.194964 kubelet[2768]: E0127 12:53:57.194330 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.194964 kubelet[2768]: W0127 12:53:57.194607 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.197148 kubelet[2768]: E0127 12:53:57.196451 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.199362 kubelet[2768]: E0127 12:53:57.199244 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.199362 kubelet[2768]: W0127 12:53:57.199264 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.199811 kubelet[2768]: E0127 12:53:57.199599 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.201600 kubelet[2768]: E0127 12:53:57.201581 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.201844 kubelet[2768]: W0127 12:53:57.201668 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.202064 kubelet[2768]: E0127 12:53:57.202048 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.202818 kubelet[2768]: E0127 12:53:57.202802 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.203804 kubelet[2768]: W0127 12:53:57.203352 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.204007 kubelet[2768]: E0127 12:53:57.203990 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:57.204797 kubelet[2768]: E0127 12:53:57.204782 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.205307 kubelet[2768]: W0127 12:53:57.205139 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.205307 kubelet[2768]: E0127 12:53:57.205158 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.206884 kubelet[2768]: E0127 12:53:57.206656 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.206884 kubelet[2768]: W0127 12:53:57.206719 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.206884 kubelet[2768]: E0127 12:53:57.206736 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.208454 kubelet[2768]: E0127 12:53:57.208239 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.209797 kubelet[2768]: W0127 12:53:57.209170 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.209797 kubelet[2768]: E0127 12:53:57.209187 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.209797 kubelet[2768]: E0127 12:53:57.209528 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.209797 kubelet[2768]: W0127 12:53:57.209539 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.209797 kubelet[2768]: E0127 12:53:57.209553 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.212136 kubelet[2768]: E0127 12:53:57.212119 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.212136 kubelet[2768]: W0127 12:53:57.212132 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.213137 kubelet[2768]: E0127 12:53:57.212145 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:57.214116 kubelet[2768]: E0127 12:53:57.214050 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.214116 kubelet[2768]: W0127 12:53:57.214112 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.214359 kubelet[2768]: E0127 12:53:57.214126 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.215846 kubelet[2768]: E0127 12:53:57.215757 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.215846 kubelet[2768]: W0127 12:53:57.215771 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.215846 kubelet[2768]: E0127 12:53:57.215783 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.216635 kubelet[2768]: E0127 12:53:57.216235 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.216635 kubelet[2768]: W0127 12:53:57.216249 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.216635 kubelet[2768]: E0127 12:53:57.216261 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.217240 kubelet[2768]: E0127 12:53:57.217205 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.217240 kubelet[2768]: W0127 12:53:57.217220 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.217240 kubelet[2768]: E0127 12:53:57.217233 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.218107 kubelet[2768]: E0127 12:53:57.218029 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.218107 kubelet[2768]: W0127 12:53:57.218086 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.218107 kubelet[2768]: E0127 12:53:57.218100 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 12:53:57.219752 kubelet[2768]: E0127 12:53:57.219588 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 12:53:57.219752 kubelet[2768]: W0127 12:53:57.219650 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 12:53:57.219752 kubelet[2768]: E0127 12:53:57.219666 2768 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 12:53:57.230883 systemd[1]: cri-containerd-82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0.scope: Deactivated successfully. Jan 27 12:53:57.235000 audit: BPF prog-id=168 op=UNLOAD Jan 27 12:53:57.238132 containerd[1598]: time="2026-01-27T12:53:57.238048513Z" level=info msg="received container exit event container_id:\"82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0\" id:\"82abb4174575ad9ce1a8bee6b1360f9086c1e4312c66d62d912a88c7295704f0\" pid:3446 exited_at:{seconds:1769518437 nanos:237376200}" Jan 27 12:53:57.986947 kubelet[2768]: E0127 12:53:57.986805 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:53:58.104366 kubelet[2768]: I0127 12:53:58.104158 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:53:58.106284 kubelet[2768]: E0127 12:53:58.104499 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:58.106284 kubelet[2768]: E0127 12:53:58.104730 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:53:58.106754 containerd[1598]: time="2026-01-27T12:53:58.106656748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 27 12:53:58.128114 kubelet[2768]: I0127 12:53:58.127268 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c59b88c84-pn4cl" podStartSLOduration=3.268052639 podStartE2EDuration="5.127249807s" podCreationTimestamp="2026-01-27 12:53:53 +0000 UTC" firstStartedPulling="2026-01-27 12:53:54.441660413 +0000 UTC m=+21.615570844" lastFinishedPulling="2026-01-27 12:53:56.30085758 +0000 UTC m=+23.474768012" observedRunningTime="2026-01-27 12:53:57.120523015 +0000 UTC m=+24.294433486" watchObservedRunningTime="2026-01-27 12:53:58.127249807 +0000 UTC m=+25.301160238" Jan 27 12:53:59.986640 kubelet[2768]: E0127 12:53:59.986524 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:00.417937 containerd[1598]: time="2026-01-27T12:54:00.417857460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:00.419342 containerd[1598]: time="2026-01-27T12:54:00.419150283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 27 12:54:00.420998 containerd[1598]: time="2026-01-27T12:54:00.420786183Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:00.423889 containerd[1598]: time="2026-01-27T12:54:00.423832035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:00.424409 containerd[1598]: time="2026-01-27T12:54:00.424319597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.317558123s" Jan 27 12:54:00.424409 containerd[1598]: time="2026-01-27T12:54:00.424375802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 27 12:54:00.431873 containerd[1598]: time="2026-01-27T12:54:00.431835626Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 27 12:54:00.444418 containerd[1598]: time="2026-01-27T12:54:00.444324341Z" level=info msg="Container b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:54:00.457046 containerd[1598]: time="2026-01-27T12:54:00.456836804Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939\"" Jan 27 12:54:00.458867 containerd[1598]: time="2026-01-27T12:54:00.458785156Z" level=info msg="StartContainer for \"b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939\"" Jan 27 12:54:00.461259 containerd[1598]: time="2026-01-27T12:54:00.461168313Z" level=info msg="connecting to shim b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939" address="unix:///run/containerd/s/08035aa0bf85a400f1a0a21e3f7f998f097e96b6e69a66983a596e5d4d20918c" protocol=ttrpc version=3 Jan 27 12:54:00.495144 systemd[1]: Started cri-containerd-b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939.scope - libcontainer container b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939. 
Jan 27 12:54:00.575000 audit: BPF prog-id=169 op=LOAD Jan 27 12:54:00.579294 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 27 12:54:00.579397 kernel: audit: type=1334 audit(1769518440.575:558): prog-id=169 op=LOAD Jan 27 12:54:00.575000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.594822 kernel: audit: type=1300 audit(1769518440.575:558): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.607788 kernel: audit: type=1327 audit(1769518440.575:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.607846 kernel: audit: type=1334 audit(1769518440.575:559): prog-id=170 op=LOAD Jan 27 12:54:00.575000 audit: BPF prog-id=170 op=LOAD Jan 27 12:54:00.575000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.624536 kernel: audit: type=1300 audit(1769518440.575:559): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.624581 kernel: audit: type=1327 audit(1769518440.575:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.575000 audit: BPF prog-id=170 op=UNLOAD Jan 27 12:54:00.644980 kernel: audit: type=1334 audit(1769518440.575:560): prog-id=170 op=UNLOAD Jan 27 12:54:00.645176 kernel: audit: type=1300 audit(1769518440.575:560): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.575000 
audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.652199 containerd[1598]: time="2026-01-27T12:54:00.652080097Z" level=info msg="StartContainer for \"b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939\" returns successfully" Jan 27 12:54:00.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.674273 kernel: audit: type=1327 audit(1769518440.575:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.674346 kernel: audit: type=1334 audit(1769518440.576:561): prog-id=169 op=UNLOAD Jan 27 12:54:00.576000 audit: BPF prog-id=169 op=UNLOAD Jan 27 12:54:00.576000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:00.576000 audit: BPF prog-id=171 op=LOAD Jan 27 12:54:00.576000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3307 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:00.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233326462346134616534626139356433323034386435643363376565 Jan 27 12:54:01.124294 kubelet[2768]: E0127 12:54:01.124224 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:01.360831 systemd[1]: cri-containerd-b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939.scope: Deactivated successfully. Jan 27 12:54:01.362611 systemd[1]: cri-containerd-b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939.scope: Consumed 790ms CPU time, 177M memory peak, 5.6M read from disk, 171.3M written to disk. 
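The audit records interleaved above carry the runc command line as a hex string in their PROCTITLE fields, with NUL bytes separating the argv elements. A small Go sketch that decodes such a value back into a readable command line; the constant below is a shortened prefix of one of the hex strings logged above, nothing more:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Prefix of a PROCTITLE value from the audit records above.
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Replace the NUL separators with spaces to recover the command line.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
        // Output: runc --root /run/containerd/runc/k8s.io --log
    }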
Jan 27 12:54:01.366000 audit: BPF prog-id=171 op=UNLOAD Jan 27 12:54:01.369828 containerd[1598]: time="2026-01-27T12:54:01.369048798Z" level=info msg="received container exit event container_id:\"b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939\" id:\"b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939\" pid:3525 exited_at:{seconds:1769518441 nanos:362751019}" Jan 27 12:54:01.425127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32db4a4ae4ba95d32048d5d3c7ee0437b834838887d2d843451e90ec98cf939-rootfs.mount: Deactivated successfully. Jan 27 12:54:01.459838 kubelet[2768]: I0127 12:54:01.459786 2768 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 27 12:54:01.520281 systemd[1]: Created slice kubepods-besteffort-pod545117a9_f0f3_4779_b9f4_2ae365c2cf4f.slice - libcontainer container kubepods-besteffort-pod545117a9_f0f3_4779_b9f4_2ae365c2cf4f.slice. Jan 27 12:54:01.536800 systemd[1]: Created slice kubepods-besteffort-pode6d3c258_6f1e_4868_8f36_862014b4b2fc.slice - libcontainer container kubepods-besteffort-pode6d3c258_6f1e_4868_8f36_862014b4b2fc.slice. Jan 27 12:54:01.549481 systemd[1]: Created slice kubepods-burstable-pode9a3713f_f0ca_48fe_b261_15054e0b1d7d.slice - libcontainer container kubepods-burstable-pode9a3713f_f0ca_48fe_b261_15054e0b1d7d.slice. Jan 27 12:54:01.563273 systemd[1]: Created slice kubepods-besteffort-pod13a845a0_aaa5_4e80_8a2f_691163970ae8.slice - libcontainer container kubepods-besteffort-pod13a845a0_aaa5_4e80_8a2f_691163970ae8.slice. Jan 27 12:54:01.565337 kubelet[2768]: I0127 12:54:01.565158 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrj6\" (UniqueName: \"kubernetes.io/projected/32d8681f-2b1f-4fad-bc6d-7656e61dae7d-kube-api-access-fxrj6\") pod \"calico-apiserver-6df48b7979-8w89r\" (UID: \"32d8681f-2b1f-4fad-bc6d-7656e61dae7d\") " pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" Jan 27 12:54:01.565337 kubelet[2768]: I0127 12:54:01.565224 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/13a845a0-aaa5-4e80-8a2f-691163970ae8-goldmane-key-pair\") pod \"goldmane-7c778bb748-mtm9p\" (UID: \"13a845a0-aaa5-4e80-8a2f-691163970ae8\") " pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:01.565337 kubelet[2768]: I0127 12:54:01.565244 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6d3c258-6f1e-4868-8f36-862014b4b2fc-calico-apiserver-certs\") pod \"calico-apiserver-6df48b7979-cgdx9\" (UID: \"e6d3c258-6f1e-4868-8f36-862014b4b2fc\") " pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" Jan 27 12:54:01.565337 kubelet[2768]: I0127 12:54:01.565258 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zgww\" (UniqueName: \"kubernetes.io/projected/e9a3713f-f0ca-48fe-b261-15054e0b1d7d-kube-api-access-5zgww\") pod \"coredns-66bc5c9577-2flk2\" (UID: \"e9a3713f-f0ca-48fe-b261-15054e0b1d7d\") " pod="kube-system/coredns-66bc5c9577-2flk2" Jan 27 12:54:01.565337 kubelet[2768]: I0127 12:54:01.565273 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwtn\" (UniqueName: \"kubernetes.io/projected/518046d9-b7bc-493b-96b2-44b9979317ed-kube-api-access-ptwtn\") pod 
\"calico-kube-controllers-5d95ff6778-flxqp\" (UID: \"518046d9-b7bc-493b-96b2-44b9979317ed\") " pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" Jan 27 12:54:01.566249 kubelet[2768]: I0127 12:54:01.565289 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-backend-key-pair\") pod \"whisker-77b59c446d-74fh4\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " pod="calico-system/whisker-77b59c446d-74fh4" Jan 27 12:54:01.566249 kubelet[2768]: I0127 12:54:01.565302 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9a3713f-f0ca-48fe-b261-15054e0b1d7d-config-volume\") pod \"coredns-66bc5c9577-2flk2\" (UID: \"e9a3713f-f0ca-48fe-b261-15054e0b1d7d\") " pod="kube-system/coredns-66bc5c9577-2flk2" Jan 27 12:54:01.566249 kubelet[2768]: I0127 12:54:01.565316 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxb2j\" (UniqueName: \"kubernetes.io/projected/13a845a0-aaa5-4e80-8a2f-691163970ae8-kube-api-access-lxb2j\") pod \"goldmane-7c778bb748-mtm9p\" (UID: \"13a845a0-aaa5-4e80-8a2f-691163970ae8\") " pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:01.566249 kubelet[2768]: I0127 12:54:01.565329 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518046d9-b7bc-493b-96b2-44b9979317ed-tigera-ca-bundle\") pod \"calico-kube-controllers-5d95ff6778-flxqp\" (UID: \"518046d9-b7bc-493b-96b2-44b9979317ed\") " pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" Jan 27 12:54:01.566249 kubelet[2768]: I0127 12:54:01.565343 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-ca-bundle\") pod \"whisker-77b59c446d-74fh4\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " pod="calico-system/whisker-77b59c446d-74fh4" Jan 27 12:54:01.566450 kubelet[2768]: I0127 12:54:01.565356 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvbr\" (UniqueName: \"kubernetes.io/projected/da6ec3a8-0f71-43a9-9c60-07db37f3df34-kube-api-access-tpvbr\") pod \"coredns-66bc5c9577-rj6rv\" (UID: \"da6ec3a8-0f71-43a9-9c60-07db37f3df34\") " pod="kube-system/coredns-66bc5c9577-rj6rv" Jan 27 12:54:01.566450 kubelet[2768]: I0127 12:54:01.565369 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxfc\" (UniqueName: \"kubernetes.io/projected/e6d3c258-6f1e-4868-8f36-862014b4b2fc-kube-api-access-wrxfc\") pod \"calico-apiserver-6df48b7979-cgdx9\" (UID: \"e6d3c258-6f1e-4868-8f36-862014b4b2fc\") " pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" Jan 27 12:54:01.566450 kubelet[2768]: I0127 12:54:01.565387 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32d8681f-2b1f-4fad-bc6d-7656e61dae7d-calico-apiserver-certs\") pod \"calico-apiserver-6df48b7979-8w89r\" (UID: \"32d8681f-2b1f-4fad-bc6d-7656e61dae7d\") " pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" Jan 27 12:54:01.566450 kubelet[2768]: 
I0127 12:54:01.565401 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bd9\" (UniqueName: \"kubernetes.io/projected/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-kube-api-access-g4bd9\") pod \"whisker-77b59c446d-74fh4\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " pod="calico-system/whisker-77b59c446d-74fh4" Jan 27 12:54:01.566450 kubelet[2768]: I0127 12:54:01.565414 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6ec3a8-0f71-43a9-9c60-07db37f3df34-config-volume\") pod \"coredns-66bc5c9577-rj6rv\" (UID: \"da6ec3a8-0f71-43a9-9c60-07db37f3df34\") " pod="kube-system/coredns-66bc5c9577-rj6rv" Jan 27 12:54:01.566637 kubelet[2768]: I0127 12:54:01.565427 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a845a0-aaa5-4e80-8a2f-691163970ae8-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mtm9p\" (UID: \"13a845a0-aaa5-4e80-8a2f-691163970ae8\") " pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:01.566637 kubelet[2768]: I0127 12:54:01.565444 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a845a0-aaa5-4e80-8a2f-691163970ae8-config\") pod \"goldmane-7c778bb748-mtm9p\" (UID: \"13a845a0-aaa5-4e80-8a2f-691163970ae8\") " pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:01.572880 systemd[1]: Created slice kubepods-besteffort-pod32d8681f_2b1f_4fad_bc6d_7656e61dae7d.slice - libcontainer container kubepods-besteffort-pod32d8681f_2b1f_4fad_bc6d_7656e61dae7d.slice. Jan 27 12:54:01.583395 systemd[1]: Created slice kubepods-besteffort-pod518046d9_b7bc_493b_96b2_44b9979317ed.slice - libcontainer container kubepods-besteffort-pod518046d9_b7bc_493b_96b2_44b9979317ed.slice. Jan 27 12:54:01.593115 systemd[1]: Created slice kubepods-burstable-podda6ec3a8_0f71_43a9_9c60_07db37f3df34.slice - libcontainer container kubepods-burstable-podda6ec3a8_0f71_43a9_9c60_07db37f3df34.slice. 
Jan 27 12:54:01.831990 containerd[1598]: time="2026-01-27T12:54:01.831743595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b59c446d-74fh4,Uid:545117a9-f0f3-4779-b9f4-2ae365c2cf4f,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:01.846081 containerd[1598]: time="2026-01-27T12:54:01.845793143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-cgdx9,Uid:e6d3c258-6f1e-4868-8f36-862014b4b2fc,Namespace:calico-apiserver,Attempt:0,}" Jan 27 12:54:01.859431 kubelet[2768]: E0127 12:54:01.859121 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:01.860837 containerd[1598]: time="2026-01-27T12:54:01.860584299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2flk2,Uid:e9a3713f-f0ca-48fe-b261-15054e0b1d7d,Namespace:kube-system,Attempt:0,}" Jan 27 12:54:01.872991 containerd[1598]: time="2026-01-27T12:54:01.872851413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtm9p,Uid:13a845a0-aaa5-4e80-8a2f-691163970ae8,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:01.885598 containerd[1598]: time="2026-01-27T12:54:01.885497200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-8w89r,Uid:32d8681f-2b1f-4fad-bc6d-7656e61dae7d,Namespace:calico-apiserver,Attempt:0,}" Jan 27 12:54:01.895195 containerd[1598]: time="2026-01-27T12:54:01.895107655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d95ff6778-flxqp,Uid:518046d9-b7bc-493b-96b2-44b9979317ed,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:01.903189 kubelet[2768]: E0127 12:54:01.903021 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:01.909757 containerd[1598]: time="2026-01-27T12:54:01.909420958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rj6rv,Uid:da6ec3a8-0f71-43a9-9c60-07db37f3df34,Namespace:kube-system,Attempt:0,}" Jan 27 12:54:01.995508 systemd[1]: Created slice kubepods-besteffort-pod6af69036_827e_49bb_8e7c_3940b856830f.slice - libcontainer container kubepods-besteffort-pod6af69036_827e_49bb_8e7c_3940b856830f.slice. 
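The recurring kubelet warning "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" means the node's resolv.conf lists more nameservers than the kubelet will pass through, so only the first entries (three in this log) are applied and the rest are dropped. A minimal sketch of that truncation, assuming a cap of three as seen in the log; the sample resolv.conf content, including the extra 9.9.9.9 entry, is hypothetical:

    package main

    import (
        "fmt"
        "strings"
    )

    // applyNameserverLimit keeps at most limit nameserver entries, mirroring
    // the truncation the kubelet warns about above.
    func applyNameserverLimit(resolvConf string, limit int) (applied, omitted []string) {
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                if len(applied) < limit {
                    applied = append(applied, fields[1])
                } else {
                    omitted = append(omitted, fields[1])
                }
            }
        }
        return applied, omitted
    }

    func main() {
        // Hypothetical resolv.conf with one nameserver too many.
        const resolvConf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"

        applied, omitted := applyNameserverLimit(resolvConf, 3)
        fmt.Println("applied:", strings.Join(applied, " "))
        fmt.Println("omitted:", strings.Join(omitted, " "))
    }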
Jan 27 12:54:02.004239 containerd[1598]: time="2026-01-27T12:54:02.004126466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vwvj,Uid:6af69036-827e-49bb-8e7c-3940b856830f,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:02.063521 containerd[1598]: time="2026-01-27T12:54:02.063471799Z" level=error msg="Failed to destroy network for sandbox \"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.074256 containerd[1598]: time="2026-01-27T12:54:02.074170359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rj6rv,Uid:da6ec3a8-0f71-43a9-9c60-07db37f3df34,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.074619 containerd[1598]: time="2026-01-27T12:54:02.074504310Z" level=error msg="Failed to destroy network for sandbox \"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.075071 kubelet[2768]: E0127 12:54:02.074986 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.075129 kubelet[2768]: E0127 12:54:02.075096 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rj6rv" Jan 27 12:54:02.075129 kubelet[2768]: E0127 12:54:02.075116 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rj6rv" Jan 27 12:54:02.075277 kubelet[2768]: E0127 12:54:02.075222 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rj6rv_kube-system(da6ec3a8-0f71-43a9-9c60-07db37f3df34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rj6rv_kube-system(da6ec3a8-0f71-43a9-9c60-07db37f3df34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72a34d7369d5f9e9f6013372bc003c9313ed3bcc2a64d6fae3902cff6a3e67a8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rj6rv" podUID="da6ec3a8-0f71-43a9-9c60-07db37f3df34" Jan 27 12:54:02.076293 containerd[1598]: time="2026-01-27T12:54:02.076152014Z" level=error msg="Failed to destroy network for sandbox \"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.077434 containerd[1598]: time="2026-01-27T12:54:02.076572942Z" level=error msg="Failed to destroy network for sandbox \"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.079128 containerd[1598]: time="2026-01-27T12:54:02.078987324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b59c446d-74fh4,Uid:545117a9-f0f3-4779-b9f4-2ae365c2cf4f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.079924 kubelet[2768]: E0127 12:54:02.079802 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.079924 kubelet[2768]: E0127 12:54:02.079870 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77b59c446d-74fh4" Jan 27 12:54:02.080403 kubelet[2768]: E0127 12:54:02.080014 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77b59c446d-74fh4" Jan 27 12:54:02.080403 kubelet[2768]: E0127 12:54:02.080116 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77b59c446d-74fh4_calico-system(545117a9-f0f3-4779-b9f4-2ae365c2cf4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77b59c446d-74fh4_calico-system(545117a9-f0f3-4779-b9f4-2ae365c2cf4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9250a391978fd3eb66d601db7bfadc81b4281d69ae11f46822ae8dd0d626f690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77b59c446d-74fh4" podUID="545117a9-f0f3-4779-b9f4-2ae365c2cf4f" Jan 27 12:54:02.084188 containerd[1598]: time="2026-01-27T12:54:02.083889150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2flk2,Uid:e9a3713f-f0ca-48fe-b261-15054e0b1d7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.089292 containerd[1598]: time="2026-01-27T12:54:02.088737646Z" level=error msg="Failed to destroy network for sandbox \"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.090857 containerd[1598]: time="2026-01-27T12:54:02.089858369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-cgdx9,Uid:e6d3c258-6f1e-4868-8f36-862014b4b2fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.091480 kubelet[2768]: E0127 12:54:02.090133 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.091480 kubelet[2768]: E0127 12:54:02.090168 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" Jan 27 12:54:02.091480 kubelet[2768]: E0127 12:54:02.090184 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" Jan 27 12:54:02.092861 kubelet[2768]: E0127 12:54:02.090256 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afd59d7235d0d6894ad13122bc5b0a4e0b24b0508290a36332c4351d52511b08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:02.095026 containerd[1598]: time="2026-01-27T12:54:02.093852068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtm9p,Uid:13a845a0-aaa5-4e80-8a2f-691163970ae8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.095135 kubelet[2768]: E0127 12:54:02.094050 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.095135 kubelet[2768]: E0127 12:54:02.094088 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:02.095135 kubelet[2768]: E0127 12:54:02.094106 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mtm9p" Jan 27 12:54:02.095213 kubelet[2768]: E0127 12:54:02.094139 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5de67ab942fe11adf92751f02b42f78241c8b83489ccc0542d9d987b900bc993\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:02.099231 kubelet[2768]: E0127 
12:54:02.099164 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.099294 kubelet[2768]: E0127 12:54:02.099240 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2flk2" Jan 27 12:54:02.099294 kubelet[2768]: E0127 12:54:02.099257 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2flk2" Jan 27 12:54:02.099351 kubelet[2768]: E0127 12:54:02.099295 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2flk2_kube-system(e9a3713f-f0ca-48fe-b261-15054e0b1d7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2flk2_kube-system(e9a3713f-f0ca-48fe-b261-15054e0b1d7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fe0aa013425b94a8ed5671d58b809f30033d54c2baa526fbd33ef0ca0274654\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2flk2" podUID="e9a3713f-f0ca-48fe-b261-15054e0b1d7d" Jan 27 12:54:02.109132 containerd[1598]: time="2026-01-27T12:54:02.109002075Z" level=error msg="Failed to destroy network for sandbox \"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.113137 containerd[1598]: time="2026-01-27T12:54:02.112986463Z" level=error msg="Failed to destroy network for sandbox \"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.114018 containerd[1598]: time="2026-01-27T12:54:02.113989952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d95ff6778-flxqp,Uid:518046d9-b7bc-493b-96b2-44b9979317ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 27 12:54:02.115351 kubelet[2768]: E0127 12:54:02.114840 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.115351 kubelet[2768]: E0127 12:54:02.115047 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" Jan 27 12:54:02.115351 kubelet[2768]: E0127 12:54:02.115152 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" Jan 27 12:54:02.116095 kubelet[2768]: E0127 12:54:02.115305 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65b37151f0cf23ac922ac24b357ec5f7ad329846b5e6fdf427dcc5f7ffd58e62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:02.119446 containerd[1598]: time="2026-01-27T12:54:02.119364099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-8w89r,Uid:32d8681f-2b1f-4fad-bc6d-7656e61dae7d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.120058 kubelet[2768]: E0127 12:54:02.120005 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.120114 kubelet[2768]: E0127 12:54:02.120065 2768 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" Jan 27 12:54:02.120114 kubelet[2768]: E0127 12:54:02.120081 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" Jan 27 12:54:02.120201 kubelet[2768]: E0127 12:54:02.120115 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3af23c18ad6e7b130f69661a189a14fe9f2c5705b353d1b62e87faebfaaccf94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:02.132603 kubelet[2768]: E0127 12:54:02.132579 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:02.137501 containerd[1598]: time="2026-01-27T12:54:02.136306770Z" level=error msg="Failed to destroy network for sandbox \"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.139113 containerd[1598]: time="2026-01-27T12:54:02.138299879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 27 12:54:02.144975 containerd[1598]: time="2026-01-27T12:54:02.144798304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vwvj,Uid:6af69036-827e-49bb-8e7c-3940b856830f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.145252 kubelet[2768]: E0127 12:54:02.145186 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 12:54:02.145252 kubelet[2768]: E0127 12:54:02.145250 2768 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:54:02.145252 kubelet[2768]: E0127 12:54:02.145267 2768 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vwvj" Jan 27 12:54:02.145556 kubelet[2768]: E0127 12:54:02.145302 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fca98a3cc8f3fc98b5a7ebbd6b29de63fb9e2fdbf20740137236a1023cc0368\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:08.163981 kubelet[2768]: I0127 12:54:08.163842 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:54:08.165181 kubelet[2768]: E0127 12:54:08.164567 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:08.192632 kubelet[2768]: E0127 12:54:08.192556 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:08.238000 audit[3832]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:08.241924 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 27 12:54:08.242094 kernel: audit: type=1325 audit(1769518448.238:564): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:08.263214 kernel: audit: type=1300 audit(1769518448.238:564): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7e07b7b0 a2=0 a3=7ffe7e07b79c items=0 ppid=2925 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:08.238000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7e07b7b0 a2=0 a3=7ffe7e07b79c items=0 ppid=2925 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:08.238000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:08.269129 kernel: audit: type=1327 audit(1769518448.238:564): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:08.269188 kernel: audit: type=1325 audit(1769518448.264:565): table=nat:116 family=2 entries=19 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:08.264000 audit[3832]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:08.288954 kernel: audit: type=1300 audit(1769518448.264:565): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7e07b7b0 a2=0 a3=7ffe7e07b79c items=0 ppid=2925 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:08.264000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7e07b7b0 a2=0 a3=7ffe7e07b79c items=0 ppid=2925 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:08.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:08.295956 kernel: audit: type=1327 audit(1769518448.264:565): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:10.084991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121453621.mount: Deactivated successfully. 
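The PROCTITLE fields in the audit records above carry the audited command line as hex-encoded, NUL-separated argv bytes. A minimal Python sketch (illustrative, not part of the log) decodes the iptables record shown here:

    # Decode an audit PROCTITLE value: the kernel hex-encodes argv with NUL
    # separators, so splitting the raw bytes on b"\x00" recovers the command.
    hex_proctitle = (
        "69707461626C65732D726573746F7265002D770035"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> iptables-restore -w 5 --noflush --counters

The same decoding applies to the runc and bpftool PROCTITLE records that follow.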
Jan 27 12:54:10.295984 containerd[1598]: time="2026-01-27T12:54:10.295624168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:10.297453 containerd[1598]: time="2026-01-27T12:54:10.297366350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 27 12:54:10.299051 containerd[1598]: time="2026-01-27T12:54:10.298998055Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:10.302058 containerd[1598]: time="2026-01-27T12:54:10.301796879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 12:54:10.304861 containerd[1598]: time="2026-01-27T12:54:10.304659211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.165636676s" Jan 27 12:54:10.304861 containerd[1598]: time="2026-01-27T12:54:10.304782681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 27 12:54:10.331198 containerd[1598]: time="2026-01-27T12:54:10.330990358Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 27 12:54:10.345075 containerd[1598]: time="2026-01-27T12:54:10.344152597Z" level=info msg="Container 4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:54:10.361591 containerd[1598]: time="2026-01-27T12:54:10.361429469Z" level=info msg="CreateContainer within sandbox \"c7bf6c9f00bee07cb3ee5c9f253d6f464abd89d59e0c89ba528b7ca3cda33fbd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6\"" Jan 27 12:54:10.362423 containerd[1598]: time="2026-01-27T12:54:10.362343152Z" level=info msg="StartContainer for \"4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6\"" Jan 27 12:54:10.364201 containerd[1598]: time="2026-01-27T12:54:10.363817914Z" level=info msg="connecting to shim 4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6" address="unix:///run/containerd/s/08035aa0bf85a400f1a0a21e3f7f998f097e96b6e69a66983a596e5d4d20918c" protocol=ttrpc version=3 Jan 27 12:54:10.403260 systemd[1]: Started cri-containerd-4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6.scope - libcontainer container 4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6. 
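As a rough cross-check of the pull timing reported above, the gap between the PullImage entry at 12:54:02.138 and the Pulled entry at 12:54:10.304 is about 8.17 s, consistent with the "8.165636676s" containerd reports. A small sketch, with the nanosecond timestamps truncated to microseconds so strptime can parse them:

    from datetime import datetime

    # Timestamps copied from the two containerd entries above (truncated to µs).
    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    started  = datetime.strptime("2026-01-27T12:54:02.138299", fmt)
    finished = datetime.strptime("2026-01-27T12:54:10.304659", fmt)
    print((finished - started).total_seconds())   # ~8.166 s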
Jan 27 12:54:10.494000 audit: BPF prog-id=172 op=LOAD Jan 27 12:54:10.494000 audit[3835]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.500132 kernel: audit: type=1334 audit(1769518450.494:566): prog-id=172 op=LOAD Jan 27 12:54:10.500189 kernel: audit: type=1300 audit(1769518450.494:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.538233 kernel: audit: type=1327 audit(1769518450.494:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.538441 kernel: audit: type=1334 audit(1769518450.494:567): prog-id=173 op=LOAD Jan 27 12:54:10.494000 audit: BPF prog-id=173 op=LOAD Jan 27 12:54:10.494000 audit[3835]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.494000 audit: BPF prog-id=173 op=UNLOAD Jan 27 12:54:10.494000 audit[3835]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.494000 audit: BPF prog-id=172 op=UNLOAD Jan 27 12:54:10.494000 audit[3835]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.494000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.494000 audit: BPF prog-id=174 op=LOAD Jan 27 12:54:10.494000 audit[3835]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3307 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:10.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432363133353365373234313663653639313265636231616461633431 Jan 27 12:54:10.574493 containerd[1598]: time="2026-01-27T12:54:10.574448608Z" level=info msg="StartContainer for \"4261353e72416ce6912ecb1adac417cddd10366e5dbfa1b66ec11681e55716c6\" returns successfully" Jan 27 12:54:10.740757 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 27 12:54:10.741109 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 27 12:54:11.052208 kubelet[2768]: I0127 12:54:11.052060 2768 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-backend-key-pair\") pod \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " Jan 27 12:54:11.053814 kubelet[2768]: I0127 12:54:11.053215 2768 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-ca-bundle\") pod \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " Jan 27 12:54:11.053814 kubelet[2768]: I0127 12:54:11.053381 2768 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4bd9\" (UniqueName: \"kubernetes.io/projected/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-kube-api-access-g4bd9\") pod \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\" (UID: \"545117a9-f0f3-4779-b9f4-2ae365c2cf4f\") " Jan 27 12:54:11.054523 kubelet[2768]: I0127 12:54:11.054439 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "545117a9-f0f3-4779-b9f4-2ae365c2cf4f" (UID: "545117a9-f0f3-4779-b9f4-2ae365c2cf4f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 27 12:54:11.061096 kubelet[2768]: I0127 12:54:11.060844 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-kube-api-access-g4bd9" (OuterVolumeSpecName: "kube-api-access-g4bd9") pod "545117a9-f0f3-4779-b9f4-2ae365c2cf4f" (UID: "545117a9-f0f3-4779-b9f4-2ae365c2cf4f"). InnerVolumeSpecName "kube-api-access-g4bd9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 27 12:54:11.061096 kubelet[2768]: I0127 12:54:11.060869 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "545117a9-f0f3-4779-b9f4-2ae365c2cf4f" (UID: "545117a9-f0f3-4779-b9f4-2ae365c2cf4f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 27 12:54:11.086441 systemd[1]: var-lib-kubelet-pods-545117a9\x2df0f3\x2d4779\x2db9f4\x2d2ae365c2cf4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4bd9.mount: Deactivated successfully. Jan 27 12:54:11.086598 systemd[1]: var-lib-kubelet-pods-545117a9\x2df0f3\x2d4779\x2db9f4\x2d2ae365c2cf4f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 27 12:54:11.155142 kubelet[2768]: I0127 12:54:11.154993 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 27 12:54:11.155142 kubelet[2768]: I0127 12:54:11.155086 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 27 12:54:11.155142 kubelet[2768]: I0127 12:54:11.155102 2768 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4bd9\" (UniqueName: \"kubernetes.io/projected/545117a9-f0f3-4779-b9f4-2ae365c2cf4f-kube-api-access-g4bd9\") on node \"localhost\" DevicePath \"\"" Jan 27 12:54:11.210561 kubelet[2768]: E0127 12:54:11.209745 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:11.223190 systemd[1]: Removed slice kubepods-besteffort-pod545117a9_f0f3_4779_b9f4_2ae365c2cf4f.slice - libcontainer container kubepods-besteffort-pod545117a9_f0f3_4779_b9f4_2ae365c2cf4f.slice. Jan 27 12:54:11.292989 kubelet[2768]: I0127 12:54:11.292583 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8lxcz" podStartSLOduration=2.51572626 podStartE2EDuration="18.292455144s" podCreationTimestamp="2026-01-27 12:53:53 +0000 UTC" firstStartedPulling="2026-01-27 12:53:54.529134712 +0000 UTC m=+21.703045143" lastFinishedPulling="2026-01-27 12:54:10.305863596 +0000 UTC m=+37.479774027" observedRunningTime="2026-01-27 12:54:11.271319147 +0000 UTC m=+38.445229619" watchObservedRunningTime="2026-01-27 12:54:11.292455144 +0000 UTC m=+38.466365576" Jan 27 12:54:11.398538 systemd[1]: Created slice kubepods-besteffort-pod7b273d03_9e4a_4d5f_a56c_d5eb5ded9cac.slice - libcontainer container kubepods-besteffort-pod7b273d03_9e4a_4d5f_a56c_d5eb5ded9cac.slice. 
Jan 27 12:54:11.459291 kubelet[2768]: I0127 12:54:11.459063 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac-whisker-backend-key-pair\") pod \"whisker-9dc77d7c4-lxzpr\" (UID: \"7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac\") " pod="calico-system/whisker-9dc77d7c4-lxzpr" Jan 27 12:54:11.459859 kubelet[2768]: I0127 12:54:11.459813 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzg99\" (UniqueName: \"kubernetes.io/projected/7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac-kube-api-access-nzg99\") pod \"whisker-9dc77d7c4-lxzpr\" (UID: \"7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac\") " pod="calico-system/whisker-9dc77d7c4-lxzpr" Jan 27 12:54:11.460247 kubelet[2768]: I0127 12:54:11.460050 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac-whisker-ca-bundle\") pod \"whisker-9dc77d7c4-lxzpr\" (UID: \"7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac\") " pod="calico-system/whisker-9dc77d7c4-lxzpr" Jan 27 12:54:11.712586 containerd[1598]: time="2026-01-27T12:54:11.712184629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9dc77d7c4-lxzpr,Uid:7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:12.023575 systemd-networkd[1508]: cali32c59706b0a: Link UP Jan 27 12:54:12.027401 systemd-networkd[1508]: cali32c59706b0a: Gained carrier Jan 27 12:54:12.049483 containerd[1598]: 2026-01-27 12:54:11.764 [INFO][3933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 12:54:12.049483 containerd[1598]: 2026-01-27 12:54:11.802 [INFO][3933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0 whisker-9dc77d7c4- calico-system 7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac 909 0 2026-01-27 12:54:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9dc77d7c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9dc77d7c4-lxzpr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali32c59706b0a [] [] }} ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-" Jan 27 12:54:12.049483 containerd[1598]: 2026-01-27 12:54:11.802 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.049483 containerd[1598]: 2026-01-27 12:54:11.933 [INFO][3947] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" HandleID="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Workload="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.935 [INFO][3947] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" 
HandleID="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Workload="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000400300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9dc77d7c4-lxzpr", "timestamp":"2026-01-27 12:54:11.933044051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.935 [INFO][3947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.936 [INFO][3947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.937 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.952 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" host="localhost" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.966 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.976 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.979 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.984 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:12.049803 containerd[1598]: 2026-01-27 12:54:11.984 [INFO][3947] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" host="localhost" Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:11.987 [INFO][3947] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6 Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:11.993 [INFO][3947] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" host="localhost" Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:12.004 [INFO][3947] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" host="localhost" Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:12.004 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" host="localhost" Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:12.004 [INFO][3947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:12.050255 containerd[1598]: 2026-01-27 12:54:12.004 [INFO][3947] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" HandleID="k8s-pod-network.9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Workload="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.050370 containerd[1598]: 2026-01-27 12:54:12.008 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0", GenerateName:"whisker-9dc77d7c4-", Namespace:"calico-system", SelfLink:"", UID:"7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9dc77d7c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9dc77d7c4-lxzpr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32c59706b0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:12.050370 containerd[1598]: 2026-01-27 12:54:12.008 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.050507 containerd[1598]: 2026-01-27 12:54:12.008 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32c59706b0a ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.050507 containerd[1598]: 2026-01-27 12:54:12.028 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.050548 containerd[1598]: 2026-01-27 12:54:12.029 [INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0", GenerateName:"whisker-9dc77d7c4-", Namespace:"calico-system", SelfLink:"", UID:"7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9dc77d7c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6", Pod:"whisker-9dc77d7c4-lxzpr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32c59706b0a", MAC:"06:be:7a:ac:82:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:12.050640 containerd[1598]: 2026-01-27 12:54:12.045 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" Namespace="calico-system" Pod="whisker-9dc77d7c4-lxzpr" WorkloadEndpoint="localhost-k8s-whisker--9dc77d7c4--lxzpr-eth0" Jan 27 12:54:12.212554 kubelet[2768]: E0127 12:54:12.212427 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:12.288363 containerd[1598]: time="2026-01-27T12:54:12.288135182Z" level=info msg="connecting to shim 9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6" address="unix:///run/containerd/s/2252c240a1dc3ad276e275643b47e899da6ac1213015db5844b5cbea86a58d8c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:12.337543 systemd[1]: Started cri-containerd-9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6.scope - libcontainer container 9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6. 
Jan 27 12:54:12.381000 audit: BPF prog-id=175 op=LOAD Jan 27 12:54:12.383000 audit: BPF prog-id=176 op=LOAD Jan 27 12:54:12.383000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.383000 audit: BPF prog-id=176 op=UNLOAD Jan 27 12:54:12.383000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.385000 audit: BPF prog-id=177 op=LOAD Jan 27 12:54:12.385000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.385000 audit: BPF prog-id=178 op=LOAD Jan 27 12:54:12.385000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.386000 audit: BPF prog-id=178 op=UNLOAD Jan 27 12:54:12.386000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.386000 audit: BPF prog-id=177 op=UNLOAD Jan 27 12:54:12.386000 audit[4008]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.386000 audit: BPF prog-id=179 op=LOAD Jan 27 12:54:12.386000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3994 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962383130666264326463376638613864663666613635313433656338 Jan 27 12:54:12.394365 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:12.492858 containerd[1598]: time="2026-01-27T12:54:12.492759898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9dc77d7c4-lxzpr,Uid:7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b810fbd2dc7f8a8df6fa65143ec85417b1c4400343a94a0281b41c791c1dbf6\"" Jan 27 12:54:12.499990 containerd[1598]: time="2026-01-27T12:54:12.499823151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 12:54:12.584836 containerd[1598]: time="2026-01-27T12:54:12.584545506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:12.591860 containerd[1598]: time="2026-01-27T12:54:12.591721987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:12.592456 containerd[1598]: time="2026-01-27T12:54:12.591835462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 12:54:12.592970 kubelet[2768]: E0127 12:54:12.592772 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:12.593073 kubelet[2768]: E0127 12:54:12.593057 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:12.594140 kubelet[2768]: E0127 12:54:12.593839 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:12.597415 containerd[1598]: time="2026-01-27T12:54:12.597083125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 12:54:12.680779 containerd[1598]: time="2026-01-27T12:54:12.680734352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:12.683557 containerd[1598]: time="2026-01-27T12:54:12.682996526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:12.683557 containerd[1598]: time="2026-01-27T12:54:12.683049173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 12:54:12.683675 kubelet[2768]: E0127 12:54:12.683370 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:12.683675 kubelet[2768]: E0127 12:54:12.683417 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:12.683675 kubelet[2768]: E0127 12:54:12.683503 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:12.683675 kubelet[2768]: E0127 12:54:12.683556 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:12.727000 audit: BPF prog-id=180 op=LOAD Jan 27 12:54:12.727000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceb731140 a2=98 a3=1fffffffffffffff items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.727000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.727000 audit: BPF prog-id=180 op=UNLOAD Jan 27 12:54:12.727000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffceb731110 a3=0 items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.727000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.727000 audit: BPF prog-id=181 op=LOAD Jan 27 12:54:12.727000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceb731020 a2=94 a3=3 items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.727000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.727000 audit: BPF prog-id=181 op=UNLOAD Jan 27 12:54:12.727000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffceb731020 a2=94 a3=3 items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.727000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.728000 audit: BPF prog-id=182 op=LOAD Jan 27 12:54:12.728000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceb731060 a2=94 a3=7ffceb731240 items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.728000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.728000 audit: BPF prog-id=182 op=UNLOAD Jan 27 12:54:12.728000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffceb731060 a2=94 a3=7ffceb731240 items=0 ppid=4072 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.728000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 12:54:12.731000 audit: BPF prog-id=183 op=LOAD Jan 27 12:54:12.731000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd77c5c420 a2=98 a3=3 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.731000 audit: BPF prog-id=183 op=UNLOAD Jan 27 12:54:12.731000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd77c5c3f0 a3=0 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.731000 audit: BPF prog-id=184 op=LOAD Jan 27 12:54:12.731000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd77c5c210 a2=94 a3=54428f items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.731000 audit: BPF prog-id=184 op=UNLOAD Jan 27 12:54:12.731000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd77c5c210 a2=94 a3=54428f items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.732000 audit: BPF prog-id=185 op=LOAD Jan 27 12:54:12.732000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd77c5c240 a2=94 a3=2 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.732000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.732000 audit: BPF prog-id=185 op=UNLOAD Jan 27 12:54:12.732000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd77c5c240 a2=0 a3=2 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.732000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.946000 audit: BPF prog-id=186 op=LOAD Jan 27 12:54:12.946000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd77c5c100 a2=94 a3=1 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
12:54:12.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.946000 audit: BPF prog-id=186 op=UNLOAD Jan 27 12:54:12.946000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd77c5c100 a2=94 a3=1 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.955000 audit: BPF prog-id=187 op=LOAD Jan 27 12:54:12.955000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd77c5c0f0 a2=94 a3=4 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.955000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=187 op=UNLOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd77c5c0f0 a2=0 a3=4 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=188 op=LOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd77c5bf50 a2=94 a3=5 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=188 op=UNLOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd77c5bf50 a2=0 a3=5 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=189 op=LOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd77c5c170 a2=94 a3=6 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=189 op=UNLOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd77c5c170 a2=0 a3=6 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.956000 audit: BPF prog-id=190 op=LOAD Jan 27 12:54:12.956000 audit[4161]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd77c5b920 a2=94 a3=88 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.956000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.957000 audit: BPF prog-id=191 op=LOAD Jan 27 12:54:12.957000 audit[4161]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd77c5b7a0 a2=94 a3=2 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.957000 audit: BPF prog-id=191 op=UNLOAD Jan 27 12:54:12.957000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd77c5b7d0 a2=0 a3=7ffd77c5b8d0 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.957000 audit: BPF prog-id=190 op=UNLOAD Jan 27 12:54:12.957000 audit[4161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2e885d10 a2=0 a3=f15a116e42126270 items=0 ppid=4072 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 12:54:12.975000 audit: BPF prog-id=192 op=LOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaa658220 a2=98 a3=1999999999999999 items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.975000 audit: BPF prog-id=192 op=UNLOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcaa6581f0 a3=0 items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.975000 audit: BPF prog-id=193 op=LOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaa658100 a2=94 a3=ffff items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.975000 audit: BPF prog-id=193 op=UNLOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcaa658100 a2=94 a3=ffff items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.975000 audit: BPF prog-id=194 op=LOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaa658140 a2=94 a3=7ffcaa658320 items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.975000 audit: BPF prog-id=194 op=UNLOAD Jan 27 12:54:12.975000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcaa658140 a2=94 a3=7ffcaa658320 items=0 ppid=4072 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:12.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 12:54:12.990729 kubelet[2768]: I0127 12:54:12.990586 2768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545117a9-f0f3-4779-b9f4-2ae365c2cf4f" path="/var/lib/kubelet/pods/545117a9-f0f3-4779-b9f4-2ae365c2cf4f/volumes" Jan 27 12:54:12.996514 containerd[1598]: time="2026-01-27T12:54:12.996472475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vwvj,Uid:6af69036-827e-49bb-8e7c-3940b856830f,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:13.115602 systemd-networkd[1508]: vxlan.calico: Link UP Jan 27 12:54:13.115619 systemd-networkd[1508]: vxlan.calico: Gained carrier Jan 27 12:54:13.161000 audit: BPF prog-id=195 op=LOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff74eb1470 a2=98 a3=0 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=195 op=UNLOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff74eb1440 a3=0 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=196 op=LOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff74eb1280 a2=94 a3=54428f items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=196 op=UNLOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff74eb1280 a2=94 a3=54428f items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=197 op=LOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff74eb12b0 a2=94 a3=2 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=197 op=UNLOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff74eb12b0 a2=0 a3=2 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=198 op=LOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7fff74eb1060 a2=94 a3=4 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=198 op=UNLOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff74eb1060 a2=94 a3=4 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=199 op=LOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff74eb1160 a2=94 a3=7fff74eb12e0 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.161000 audit: BPF prog-id=199 op=UNLOAD Jan 27 12:54:13.161000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff74eb1160 a2=0 a3=7fff74eb12e0 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.165000 audit: BPF prog-id=200 op=LOAD Jan 27 12:54:13.165000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff74eb0890 a2=94 a3=2 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.165000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.165000 audit: BPF prog-id=200 op=UNLOAD Jan 27 12:54:13.165000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff74eb0890 a2=0 a3=2 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.165000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.166000 audit: BPF prog-id=201 op=LOAD Jan 27 12:54:13.166000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff74eb0990 a2=94 a3=30 items=0 ppid=4072 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.166000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 12:54:13.183000 audit: BPF prog-id=202 op=LOAD Jan 27 12:54:13.183000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5edb93e0 a2=98 a3=0 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.183000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.184000 audit: BPF prog-id=202 op=UNLOAD Jan 27 12:54:13.184000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc5edb93b0 a3=0 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.184000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.184000 audit: BPF prog-id=203 op=LOAD Jan 27 12:54:13.184000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5edb91d0 a2=94 a3=54428f items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.184000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.184000 audit: BPF prog-id=203 op=UNLOAD Jan 27 12:54:13.184000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5edb91d0 a2=94 a3=54428f items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.184000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.184000 audit: BPF prog-id=204 op=LOAD Jan 27 12:54:13.184000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5edb9200 a2=94 a3=2 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.184000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.184000 audit: BPF prog-id=204 op=UNLOAD Jan 27 12:54:13.184000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5edb9200 a2=0 a3=2 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.184000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.227439 kubelet[2768]: E0127 12:54:13.225532 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:13.233978 kubelet[2768]: E0127 12:54:13.233017 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:13.316999 kernel: kauditd_printk_skb: 180 callbacks suppressed Jan 27 12:54:13.317110 kernel: audit: type=1325 audit(1769518453.301:628): table=filter:117 family=2 entries=20 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:13.301000 audit[4250]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:13.314045 systemd-networkd[1508]: cali85860f091b8: Link UP Jan 27 12:54:13.317760 systemd-networkd[1508]: cali85860f091b8: Gained carrier Jan 27 12:54:13.301000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeeeb76300 a2=0 a3=7ffeeeb762ec items=0 ppid=2925 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.344359 kernel: audit: type=1300 audit(1769518453.301:628): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeeeb76300 a2=0 a3=7ffeeeb762ec items=0 ppid=2925 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.344426 
kernel: audit: type=1327 audit(1769518453.301:628): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:13.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:13.333000 audit[4250]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:13.353576 kernel: audit: type=1325 audit(1769518453.333:629): table=nat:118 family=2 entries=14 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:13.353635 kernel: audit: type=1300 audit(1769518453.333:629): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeeeb76300 a2=0 a3=0 items=0 ppid=2925 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.333000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeeeb76300 a2=0 a3=0 items=0 ppid=2925 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:13.381104 kernel: audit: type=1327 audit(1769518453.333:629): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:13.381181 containerd[1598]: 2026-01-27 12:54:13.090 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5vwvj-eth0 csi-node-driver- calico-system 6af69036-827e-49bb-8e7c-3940b856830f 718 0 2026-01-27 12:53:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5vwvj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali85860f091b8 [] [] }} ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-" Jan 27 12:54:13.381181 containerd[1598]: 2026-01-27 12:54:13.091 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.381181 containerd[1598]: 2026-01-27 12:54:13.157 [INFO][4194] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" HandleID="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Workload="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.158 [INFO][4194] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" 
HandleID="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Workload="localhost-k8s-csi--node--driver--5vwvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000341aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5vwvj", "timestamp":"2026-01-27 12:54:13.157380101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.159 [INFO][4194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.159 [INFO][4194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.159 [INFO][4194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.179 [INFO][4194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" host="localhost" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.199 [INFO][4194] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.215 [INFO][4194] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.219 [INFO][4194] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.229 [INFO][4194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:13.381433 containerd[1598]: 2026-01-27 12:54:13.229 [INFO][4194] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" host="localhost" Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.233 [INFO][4194] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194 Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.272 [INFO][4194] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" host="localhost" Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.292 [INFO][4194] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" host="localhost" Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.292 [INFO][4194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" host="localhost" Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.292 [INFO][4194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:13.382042 containerd[1598]: 2026-01-27 12:54:13.292 [INFO][4194] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" HandleID="k8s-pod-network.b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Workload="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.382218 containerd[1598]: 2026-01-27 12:54:13.305 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af69036-827e-49bb-8e7c-3940b856830f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5vwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85860f091b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:13.382369 containerd[1598]: 2026-01-27 12:54:13.305 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.382369 containerd[1598]: 2026-01-27 12:54:13.305 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85860f091b8 ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.382369 containerd[1598]: 2026-01-27 12:54:13.317 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.382476 containerd[1598]: 2026-01-27 12:54:13.333 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af69036-827e-49bb-8e7c-3940b856830f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194", Pod:"csi-node-driver-5vwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85860f091b8", MAC:"76:59:b6:6c:c3:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:13.385597 containerd[1598]: 2026-01-27 12:54:13.361 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" Namespace="calico-system" Pod="csi-node-driver-5vwvj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vwvj-eth0" Jan 27 12:54:13.445310 containerd[1598]: time="2026-01-27T12:54:13.445235549Z" level=info msg="connecting to shim b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194" address="unix:///run/containerd/s/3aa0a474ce850a3b4c3db222fd23c5e0e66361da050d6372f939a1f5932deb34" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:13.503549 systemd[1]: Started cri-containerd-b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194.scope - libcontainer container b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194. 
Jan 27 12:54:13.529000 audit: BPF prog-id=205 op=LOAD Jan 27 12:54:13.531000 audit: BPF prog-id=206 op=LOAD Jan 27 12:54:13.538407 kernel: audit: type=1334 audit(1769518453.529:630): prog-id=205 op=LOAD Jan 27 12:54:13.538564 kernel: audit: type=1334 audit(1769518453.531:631): prog-id=206 op=LOAD Jan 27 12:54:13.538823 kernel: audit: type=1300 audit(1769518453.531:631): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.531000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.539208 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:13.556025 kernel: audit: type=1327 audit(1769518453.531:631): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.531000 audit: BPF prog-id=206 op=UNLOAD Jan 27 12:54:13.531000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.531000 audit: BPF prog-id=207 op=LOAD Jan 27 12:54:13.531000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.532000 audit: BPF prog-id=208 op=LOAD Jan 27 12:54:13.532000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.532000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.532000 audit: BPF prog-id=208 op=UNLOAD Jan 27 12:54:13.532000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.532000 audit: BPF prog-id=207 op=UNLOAD Jan 27 12:54:13.532000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.532000 audit: BPF prog-id=209 op=LOAD Jan 27 12:54:13.532000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4270 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238336536353963356238396630356261663039373934616533613964 Jan 27 12:54:13.554000 audit: BPF prog-id=210 op=LOAD Jan 27 12:54:13.554000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5edb90c0 a2=94 a3=1 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.554000 audit: BPF prog-id=210 op=UNLOAD Jan 27 12:54:13.554000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5edb90c0 a2=94 a3=1 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.565000 audit: 
BPF prog-id=211 op=LOAD Jan 27 12:54:13.565000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5edb90b0 a2=94 a3=4 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.565000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.565000 audit: BPF prog-id=211 op=UNLOAD Jan 27 12:54:13.565000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc5edb90b0 a2=0 a3=4 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.565000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=212 op=LOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5edb8f10 a2=94 a3=5 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=212 op=UNLOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5edb8f10 a2=0 a3=5 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=213 op=LOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5edb9130 a2=94 a3=6 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=213 op=UNLOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc5edb9130 a2=0 a3=6 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=214 op=LOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5edb88e0 a2=94 a3=88 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.566000 audit: BPF prog-id=215 op=LOAD Jan 27 12:54:13.566000 audit[4219]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc5edb8760 a2=94 a3=2 items=0 ppid=4072 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 12:54:13.577000 audit: BPF prog-id=201 op=UNLOAD Jan 27 12:54:13.577000 audit[4072]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0006ae6c0 a2=0 a3=0 items=0 ppid=4027 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.577000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 27 12:54:13.599220 containerd[1598]: time="2026-01-27T12:54:13.599104987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vwvj,Uid:6af69036-827e-49bb-8e7c-3940b856830f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b83e659c5b89f05baf09794ae3a9de166794410ae3b2d575953e83121c0a3194\"" Jan 27 12:54:13.605051 containerd[1598]: time="2026-01-27T12:54:13.604815206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 12:54:13.673000 audit[4330]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:13.673000 audit[4330]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffefc5ca30 a2=0 a3=7fffefc5ca1c items=0 ppid=4072 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.673000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:13.674000 audit[4328]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:13.674000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffccc030d60 a2=0 a3=7ffccc030d4c items=0 ppid=4072 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:13.678587 containerd[1598]: time="2026-01-27T12:54:13.678428088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:13.681060 containerd[1598]: time="2026-01-27T12:54:13.680444492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 12:54:13.681060 containerd[1598]: time="2026-01-27T12:54:13.680535021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:13.681212 kubelet[2768]: E0127 12:54:13.681144 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:13.681212 kubelet[2768]: E0127 12:54:13.681197 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:13.681405 kubelet[2768]: E0127 12:54:13.681318 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:13.686848 containerd[1598]: time="2026-01-27T12:54:13.686562242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 12:54:13.689000 audit[4329]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:13.689000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffa5740620 a2=0 a3=7fffa574060c items=0 ppid=4072 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:13.704000 audit[4332]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4332 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:13.704000 audit[4332]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fff6ebffb00 a2=0 a3=7fff6ebffaec items=0 ppid=4072 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.704000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:13.764060 containerd[1598]: time="2026-01-27T12:54:13.763843110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:13.767360 containerd[1598]: time="2026-01-27T12:54:13.767059263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 12:54:13.767360 containerd[1598]: time="2026-01-27T12:54:13.767114636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:13.768126 kubelet[2768]: E0127 12:54:13.767790 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:13.768302 kubelet[2768]: E0127 12:54:13.767887 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:13.768302 kubelet[2768]: E0127 12:54:13.768218 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:13.768302 kubelet[2768]: E0127 12:54:13.768271 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:13.779000 audit[4342]: NETFILTER_CFG table=filter:123 family=2 entries=36 op=nft_register_chain pid=4342 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:13.779000 audit[4342]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fffdf4f7d80 a2=0 a3=7fffdf4f7d6c items=0 ppid=4072 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:13.779000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:13.951303 systemd-networkd[1508]: cali32c59706b0a: Gained IPv6LL Jan 27 12:54:13.991049 kubelet[2768]: E0127 12:54:13.990872 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:13.992455 containerd[1598]: time="2026-01-27T12:54:13.992090514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rj6rv,Uid:da6ec3a8-0f71-43a9-9c60-07db37f3df34,Namespace:kube-system,Attempt:0,}" Jan 27 12:54:14.183465 systemd-networkd[1508]: cali9fd623959c3: Link UP Jan 27 12:54:14.184751 systemd-networkd[1508]: cali9fd623959c3: Gained carrier Jan 27 12:54:14.209319 containerd[1598]: 2026-01-27 12:54:14.056 [INFO][4343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rj6rv-eth0 coredns-66bc5c9577- kube-system da6ec3a8-0f71-43a9-9c60-07db37f3df34 828 0 2026-01-27 12:53:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rj6rv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9fd623959c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-" Jan 27 12:54:14.209319 containerd[1598]: 2026-01-27 12:54:14.056 [INFO][4343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.209319 containerd[1598]: 2026-01-27 12:54:14.098 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" HandleID="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Workload="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.098 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" HandleID="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Workload="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ad6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rj6rv", "timestamp":"2026-01-27 12:54:14.098448718 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.098 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.099 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.099 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.113 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" host="localhost" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.125 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.141 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.145 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.150 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:14.209855 containerd[1598]: 2026-01-27 12:54:14.150 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" host="localhost" Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.154 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6 Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.166 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" host="localhost" Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.174 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" host="localhost" Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.174 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" host="localhost" Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.174 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:14.212057 containerd[1598]: 2026-01-27 12:54:14.175 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" HandleID="k8s-pod-network.8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Workload="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.179 [INFO][4343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rj6rv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da6ec3a8-0f71-43a9-9c60-07db37f3df34", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rj6rv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fd623959c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.179 [INFO][4343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.179 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fd623959c3 ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.185 
[INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.186 [INFO][4343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rj6rv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da6ec3a8-0f71-43a9-9c60-07db37f3df34", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6", Pod:"coredns-66bc5c9577-rj6rv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fd623959c3", MAC:"f6:84:cb:2b:5f:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:14.212247 containerd[1598]: 2026-01-27 12:54:14.204 [INFO][4343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" Namespace="kube-system" Pod="coredns-66bc5c9577-rj6rv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rj6rv-eth0" Jan 27 12:54:14.232000 audit[4377]: NETFILTER_CFG table=filter:124 family=2 entries=46 op=nft_register_chain pid=4377 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:14.232000 audit[4377]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffc43d92d30 a2=0 a3=7ffc43d92d1c items=0 ppid=4072 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.232000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:14.235151 kubelet[2768]: E0127 12:54:14.234366 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:14.252499 kubelet[2768]: E0127 12:54:14.252402 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:14.269389 containerd[1598]: time="2026-01-27T12:54:14.269340200Z" level=info msg="connecting to shim 8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6" address="unix:///run/containerd/s/12426799c287cf2422c403a2c8563e64af7ce1895eeca669a48f637452ea6d10" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:14.340264 systemd[1]: Started cri-containerd-8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6.scope - libcontainer container 8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6. 
Jan 27 12:54:14.364000 audit: BPF prog-id=216 op=LOAD Jan 27 12:54:14.365000 audit: BPF prog-id=217 op=LOAD Jan 27 12:54:14.365000 audit[4397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.365000 audit: BPF prog-id=217 op=UNLOAD Jan 27 12:54:14.365000 audit[4397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.365000 audit: BPF prog-id=218 op=LOAD Jan 27 12:54:14.365000 audit[4397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.366000 audit: BPF prog-id=219 op=LOAD Jan 27 12:54:14.366000 audit[4397]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.366000 audit: BPF prog-id=219 op=UNLOAD Jan 27 12:54:14.366000 audit[4397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.366000 audit: BPF prog-id=218 op=UNLOAD Jan 27 12:54:14.366000 audit[4397]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.366000 audit: BPF prog-id=220 op=LOAD Jan 27 12:54:14.366000 audit[4397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4386 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834323761316564633337633739326163383962663538363966333362 Jan 27 12:54:14.369077 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:14.400799 systemd-networkd[1508]: cali85860f091b8: Gained IPv6LL Jan 27 12:54:14.421391 containerd[1598]: time="2026-01-27T12:54:14.421352800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rj6rv,Uid:da6ec3a8-0f71-43a9-9c60-07db37f3df34,Namespace:kube-system,Attempt:0,} returns sandbox id \"8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6\"" Jan 27 12:54:14.423048 kubelet[2768]: E0127 12:54:14.422976 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:14.431179 containerd[1598]: time="2026-01-27T12:54:14.431132685Z" level=info msg="CreateContainer within sandbox \"8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 12:54:14.452833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306929535.mount: Deactivated successfully. 
Jan 27 12:54:14.458051 containerd[1598]: time="2026-01-27T12:54:14.457668845Z" level=info msg="Container 4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:54:14.467658 containerd[1598]: time="2026-01-27T12:54:14.467556299Z" level=info msg="CreateContainer within sandbox \"8427a1edc37c792ac89bf5869f33b3ad654580d16ab3bf3962c2e8c38ff6c4e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410\"" Jan 27 12:54:14.469038 containerd[1598]: time="2026-01-27T12:54:14.468548746Z" level=info msg="StartContainer for \"4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410\"" Jan 27 12:54:14.470219 containerd[1598]: time="2026-01-27T12:54:14.470190751Z" level=info msg="connecting to shim 4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410" address="unix:///run/containerd/s/12426799c287cf2422c403a2c8563e64af7ce1895eeca669a48f637452ea6d10" protocol=ttrpc version=3 Jan 27 12:54:14.506208 systemd[1]: Started cri-containerd-4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410.scope - libcontainer container 4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410. Jan 27 12:54:14.533000 audit: BPF prog-id=221 op=LOAD Jan 27 12:54:14.534000 audit: BPF prog-id=222 op=LOAD Jan 27 12:54:14.534000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.534000 audit: BPF prog-id=222 op=UNLOAD Jan 27 12:54:14.534000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.535000 audit: BPF prog-id=223 op=LOAD Jan 27 12:54:14.535000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.535000 audit: BPF prog-id=224 op=LOAD Jan 27 12:54:14.535000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4386 pid=4424 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.535000 audit: BPF prog-id=224 op=UNLOAD Jan 27 12:54:14.535000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.535000 audit: BPF prog-id=223 op=UNLOAD Jan 27 12:54:14.535000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.535000 audit: BPF prog-id=225 op=LOAD Jan 27 12:54:14.535000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4386 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:14.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462383034656163333335643938626362346235333865353637333132 Jan 27 12:54:14.573513 containerd[1598]: time="2026-01-27T12:54:14.573396673Z" level=info msg="StartContainer for \"4b804eac335d98bcb4b538e567312a9c4dece63512291f8a0d1bd9cacf22b410\" returns successfully" Jan 27 12:54:14.591825 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Jan 27 12:54:14.999336 containerd[1598]: time="2026-01-27T12:54:14.999217347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-cgdx9,Uid:e6d3c258-6f1e-4868-8f36-862014b4b2fc,Namespace:calico-apiserver,Attempt:0,}" Jan 27 12:54:15.218215 systemd-networkd[1508]: cali6e6ed48afc1: Link UP Jan 27 12:54:15.219862 systemd-networkd[1508]: cali6e6ed48afc1: Gained carrier Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.063 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0 calico-apiserver-6df48b7979- calico-apiserver e6d3c258-6f1e-4868-8f36-862014b4b2fc 825 0 2026-01-27 12:53:48 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df48b7979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6df48b7979-cgdx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e6ed48afc1 [] [] }} ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.063 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.109 [INFO][4473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" HandleID="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Workload="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.110 [INFO][4473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" HandleID="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Workload="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6df48b7979-cgdx9", "timestamp":"2026-01-27 12:54:15.1096646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.110 [INFO][4473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.110 [INFO][4473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.110 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.122 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.132 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.140 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.143 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.147 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.147 [INFO][4473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.150 [INFO][4473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90 Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.156 [INFO][4473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.210 [INFO][4473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.210 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" host="localhost" Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.210 [INFO][4473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:15.248207 containerd[1598]: 2026-01-27 12:54:15.210 [INFO][4473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" HandleID="k8s-pod-network.179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Workload="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.214 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0", GenerateName:"calico-apiserver-6df48b7979-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6d3c258-6f1e-4868-8f36-862014b4b2fc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df48b7979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6df48b7979-cgdx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e6ed48afc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.214 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.214 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e6ed48afc1 ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.219 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.219 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0", GenerateName:"calico-apiserver-6df48b7979-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6d3c258-6f1e-4868-8f36-862014b4b2fc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df48b7979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90", Pod:"calico-apiserver-6df48b7979-cgdx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e6ed48afc1", MAC:"5a:93:49:96:46:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:15.252514 containerd[1598]: 2026-01-27 12:54:15.234 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-cgdx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--cgdx9-eth0" Jan 27 12:54:15.255053 kubelet[2768]: E0127 12:54:15.253411 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:15.260231 kubelet[2768]: E0127 12:54:15.260161 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:15.286000 audit[4488]: NETFILTER_CFG table=filter:125 family=2 entries=58 op=nft_register_chain pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:15.286000 
audit[4488]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7fff60116c40 a2=0 a3=7fff60116c2c items=0 ppid=4072 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.286000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:15.309377 containerd[1598]: time="2026-01-27T12:54:15.309254193Z" level=info msg="connecting to shim 179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90" address="unix:///run/containerd/s/0dc1189d48178e08d824c18e25338241e505d847668121ea97e0a4ebcd1cb6a2" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:15.310232 kubelet[2768]: I0127 12:54:15.309453 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rj6rv" podStartSLOduration=36.309430048 podStartE2EDuration="36.309430048s" podCreationTimestamp="2026-01-27 12:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:54:15.277306323 +0000 UTC m=+42.451216754" watchObservedRunningTime="2026-01-27 12:54:15.309430048 +0000 UTC m=+42.483340478" Jan 27 12:54:15.324000 audit[4505]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:15.324000 audit[4505]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe26231ac0 a2=0 a3=7ffe26231aac items=0 ppid=2925 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:15.338000 audit[4505]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:15.338000 audit[4505]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe26231ac0 a2=0 a3=0 items=0 ppid=2925 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:15.364029 systemd-networkd[1508]: cali9fd623959c3: Gained IPv6LL Jan 27 12:54:15.391178 systemd[1]: Started cri-containerd-179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90.scope - libcontainer container 179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90. 
Jan 27 12:54:15.413000 audit: BPF prog-id=226 op=LOAD Jan 27 12:54:15.415000 audit: BPF prog-id=227 op=LOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=227 op=UNLOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=228 op=LOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=229 op=LOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=229 op=UNLOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=228 op=UNLOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.415000 audit: BPF prog-id=230 op=LOAD Jan 27 12:54:15.415000 audit[4510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4498 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:15.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393336316164393038316361336561356636393639363762313734 Jan 27 12:54:15.418491 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:15.473504 containerd[1598]: time="2026-01-27T12:54:15.473327222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-cgdx9,Uid:e6d3c258-6f1e-4868-8f36-862014b4b2fc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"179361ad9081ca3ea5f696967b174a08f676fc3dd9a73aec5c488d59d88ade90\"" Jan 27 12:54:15.475462 containerd[1598]: time="2026-01-27T12:54:15.475420094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:54:15.630644 containerd[1598]: time="2026-01-27T12:54:15.630549892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:15.632784 containerd[1598]: time="2026-01-27T12:54:15.632733180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:54:15.633060 containerd[1598]: time="2026-01-27T12:54:15.632780837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:15.633231 kubelet[2768]: E0127 12:54:15.633133 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:15.633231 kubelet[2768]: E0127 12:54:15.633217 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:15.633383 kubelet[2768]: E0127 12:54:15.633312 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:15.633383 kubelet[2768]: E0127 12:54:15.633361 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:15.990317 containerd[1598]: time="2026-01-27T12:54:15.990026462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtm9p,Uid:13a845a0-aaa5-4e80-8a2f-691163970ae8,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:15.993233 containerd[1598]: time="2026-01-27T12:54:15.992884230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-8w89r,Uid:32d8681f-2b1f-4fad-bc6d-7656e61dae7d,Namespace:calico-apiserver,Attempt:0,}" Jan 27 12:54:15.994666 containerd[1598]: time="2026-01-27T12:54:15.994638159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d95ff6778-flxqp,Uid:518046d9-b7bc-493b-96b2-44b9979317ed,Namespace:calico-system,Attempt:0,}" Jan 27 12:54:16.232483 systemd-networkd[1508]: cali2a8a919100c: Link UP Jan 27 12:54:16.236228 systemd-networkd[1508]: cali2a8a919100c: Gained carrier Jan 27 12:54:16.259024 kubelet[2768]: E0127 12:54:16.258550 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:16.261465 kubelet[2768]: E0127 12:54:16.261366 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.072 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--mtm9p-eth0 goldmane-7c778bb748- calico-system 13a845a0-aaa5-4e80-8a2f-691163970ae8 822 0 2026-01-27 12:53:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-mtm9p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a8a919100c [] [] }} ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.072 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.134 [INFO][4579] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" HandleID="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Workload="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.135 [INFO][4579] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" HandleID="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Workload="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-mtm9p", "timestamp":"2026-01-27 12:54:16.134875937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.135 [INFO][4579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.135 [INFO][4579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.135 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.147 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.155 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.199 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.202 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.206 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.206 [INFO][4579] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.208 [INFO][4579] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1 Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.215 [INFO][4579] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4579] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" host="localhost" Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 12:54:16.263625 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4579] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" HandleID="k8s-pod-network.af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Workload="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.227 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mtm9p-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"13a845a0-aaa5-4e80-8a2f-691163970ae8", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-mtm9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a8a919100c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.228 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.228 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a8a919100c ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.235 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.237 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mtm9p-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"13a845a0-aaa5-4e80-8a2f-691163970ae8", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1", Pod:"goldmane-7c778bb748-mtm9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a8a919100c", MAC:"1e:5f:91:48:38:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.266427 containerd[1598]: 2026-01-27 12:54:16.253 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" Namespace="calico-system" Pod="goldmane-7c778bb748-mtm9p" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mtm9p-eth0" Jan 27 12:54:16.301000 audit[4615]: NETFILTER_CFG table=filter:128 family=2 entries=56 op=nft_register_chain pid=4615 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:16.301000 audit[4615]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7fffd2504aa0 a2=0 a3=7fffd2504a8c items=0 ppid=4072 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.301000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:16.314112 containerd[1598]: time="2026-01-27T12:54:16.314026639Z" level=info msg="connecting to shim af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1" address="unix:///run/containerd/s/06cadd131822153c3b5f0225fb41469650b3fd5855213aef597e2f5879d2bbcc" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:16.327030 systemd-networkd[1508]: calic4bc24f12c3: Link UP Jan 27 12:54:16.329867 
systemd-networkd[1508]: calic4bc24f12c3: Gained carrier Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.073 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0 calico-apiserver-6df48b7979- calico-apiserver 32d8681f-2b1f-4fad-bc6d-7656e61dae7d 827 0 2026-01-27 12:53:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df48b7979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6df48b7979-8w89r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4bc24f12c3 [] [] }} ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.074 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.149 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" HandleID="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Workload="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.150 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" HandleID="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Workload="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6df48b7979-8w89r", "timestamp":"2026-01-27 12:54:16.149532868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.150 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.224 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.246 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.261 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.282 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.289 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.294 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.294 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.299 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.306 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.317 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.317 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" host="localhost" Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.317 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:16.356990 containerd[1598]: 2026-01-27 12:54:16.317 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" HandleID="k8s-pod-network.c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Workload="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.322 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0", GenerateName:"calico-apiserver-6df48b7979-", Namespace:"calico-apiserver", SelfLink:"", UID:"32d8681f-2b1f-4fad-bc6d-7656e61dae7d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df48b7979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6df48b7979-8w89r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4bc24f12c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.322 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.322 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4bc24f12c3 ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.331 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.332 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0", GenerateName:"calico-apiserver-6df48b7979-", Namespace:"calico-apiserver", SelfLink:"", UID:"32d8681f-2b1f-4fad-bc6d-7656e61dae7d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df48b7979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba", Pod:"calico-apiserver-6df48b7979-8w89r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4bc24f12c3", MAC:"c2:c5:75:fa:7e:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.357627 containerd[1598]: 2026-01-27 12:54:16.350 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" Namespace="calico-apiserver" Pod="calico-apiserver-6df48b7979-8w89r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df48b7979--8w89r-eth0" Jan 27 12:54:16.381396 systemd[1]: Started cri-containerd-af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1.scope - libcontainer container af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1. 
Jan 27 12:54:16.402000 audit[4660]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4660 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:16.402000 audit[4660]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc849af3b0 a2=0 a3=7ffc849af39c items=0 ppid=2925 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:16.411000 audit[4660]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4660 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:16.411000 audit[4660]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc849af3b0 a2=0 a3=7ffc849af39c items=0 ppid=2925 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:16.432291 containerd[1598]: time="2026-01-27T12:54:16.432197810Z" level=info msg="connecting to shim c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba" address="unix:///run/containerd/s/28c47f16d997c7516e44a60b1c27adaaa2067bd4875e3822b30574669350efba" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:16.444000 audit: BPF prog-id=231 op=LOAD Jan 27 12:54:16.445000 audit[4675]: NETFILTER_CFG table=filter:131 family=2 entries=53 op=nft_register_chain pid=4675 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:16.445000 audit[4675]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fffad325b80 a2=0 a3=7fffad325b6c items=0 ppid=4072 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.445000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:16.449000 audit: BPF prog-id=232 op=LOAD Jan 27 12:54:16.449000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4238 a2=98 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.452000 audit: BPF prog-id=232 op=UNLOAD Jan 27 12:54:16.452000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.452000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.453000 audit: BPF prog-id=233 op=LOAD Jan 27 12:54:16.453000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.453000 audit: BPF prog-id=234 op=LOAD Jan 27 12:54:16.453000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.453000 audit: BPF prog-id=234 op=UNLOAD Jan 27 12:54:16.453000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.453000 audit: BPF prog-id=233 op=UNLOAD Jan 27 12:54:16.453000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.458000 audit: BPF prog-id=235 op=LOAD Jan 27 12:54:16.458000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=4624 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166343366376139346532663664313131616437393264616262323535 Jan 27 12:54:16.465249 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:16.482571 systemd-networkd[1508]: calid3517555358: Link UP Jan 27 12:54:16.488230 systemd-networkd[1508]: calid3517555358: Gained carrier Jan 27 12:54:16.511464 systemd-networkd[1508]: cali6e6ed48afc1: Gained IPv6LL Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.101 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0 calico-kube-controllers-5d95ff6778- calico-system 518046d9-b7bc-493b-96b2-44b9979317ed 829 0 2026-01-27 12:53:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d95ff6778 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d95ff6778-flxqp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3517555358 [] [] }} ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.101 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.154 [INFO][4594] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" HandleID="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Workload="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.155 [INFO][4594] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" HandleID="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Workload="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d95ff6778-flxqp", "timestamp":"2026-01-27 12:54:16.154818777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.155 [INFO][4594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.317 [INFO][4594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.318 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.349 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.363 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.380 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.388 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.395 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.395 [INFO][4594] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.407 [INFO][4594] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8 Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.421 [INFO][4594] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.439 [INFO][4594] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.439 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" host="localhost" Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.439 [INFO][4594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 12:54:16.539308 containerd[1598]: 2026-01-27 12:54:16.439 [INFO][4594] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" HandleID="k8s-pod-network.1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Workload="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.448 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0", GenerateName:"calico-kube-controllers-5d95ff6778-", Namespace:"calico-system", SelfLink:"", UID:"518046d9-b7bc-493b-96b2-44b9979317ed", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d95ff6778", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d95ff6778-flxqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3517555358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.448 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.448 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3517555358 ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.483 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.493 [INFO][4559] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0", GenerateName:"calico-kube-controllers-5d95ff6778-", Namespace:"calico-system", SelfLink:"", UID:"518046d9-b7bc-493b-96b2-44b9979317ed", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d95ff6778", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8", Pod:"calico-kube-controllers-5d95ff6778-flxqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3517555358", MAC:"0a:a2:cd:36:c1:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:16.540306 containerd[1598]: 2026-01-27 12:54:16.522 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" Namespace="calico-system" Pod="calico-kube-controllers-5d95ff6778-flxqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d95ff6778--flxqp-eth0" Jan 27 12:54:16.549399 systemd[1]: Started cri-containerd-c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba.scope - libcontainer container c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba. 
Jan 27 12:54:16.579000 audit[4709]: NETFILTER_CFG table=filter:132 family=2 entries=56 op=nft_register_chain pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:16.579000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7fff38168e40 a2=0 a3=7fff38168e2c items=0 ppid=4072 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.579000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:16.610036 containerd[1598]: time="2026-01-27T12:54:16.609143697Z" level=info msg="connecting to shim 1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8" address="unix:///run/containerd/s/4e4ef7c2fdf333bf4dc8dc2df47d60d1ba37193e92093ecbcd49ba40c7a7556c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:16.644000 audit: BPF prog-id=236 op=LOAD Jan 27 12:54:16.645000 audit: BPF prog-id=237 op=LOAD Jan 27 12:54:16.645000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.645000 audit: BPF prog-id=237 op=UNLOAD Jan 27 12:54:16.645000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.646000 audit: BPF prog-id=238 op=LOAD Jan 27 12:54:16.646000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.646000 audit: BPF prog-id=239 op=LOAD Jan 27 12:54:16.646000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.646000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.646000 audit: BPF prog-id=239 op=UNLOAD Jan 27 12:54:16.646000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.646000 audit: BPF prog-id=238 op=UNLOAD Jan 27 12:54:16.646000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.646000 audit: BPF prog-id=240 op=LOAD Jan 27 12:54:16.646000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4676 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339333032343437346336666131646635333635633461303465336133 Jan 27 12:54:16.654025 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:16.690318 systemd[1]: Started cri-containerd-1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8.scope - libcontainer container 1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8. 
Jan 27 12:54:16.701812 containerd[1598]: time="2026-01-27T12:54:16.701641173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtm9p,Uid:13a845a0-aaa5-4e80-8a2f-691163970ae8,Namespace:calico-system,Attempt:0,} returns sandbox id \"af43f7a94e2f6d111ad792dabb2553d8ce0d4fd60cd4f3babf31c8886d4068f1\"" Jan 27 12:54:16.709616 containerd[1598]: time="2026-01-27T12:54:16.709323214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 12:54:16.721000 audit: BPF prog-id=241 op=LOAD Jan 27 12:54:16.722000 audit: BPF prog-id=242 op=LOAD Jan 27 12:54:16.722000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.722000 audit: BPF prog-id=242 op=UNLOAD Jan 27 12:54:16.722000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.723000 audit: BPF prog-id=243 op=LOAD Jan 27 12:54:16.723000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.723000 audit: BPF prog-id=244 op=LOAD Jan 27 12:54:16.723000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.723000 audit: BPF prog-id=244 op=UNLOAD Jan 27 12:54:16.723000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.723000 audit: BPF prog-id=243 op=UNLOAD Jan 27 12:54:16.723000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.723000 audit: BPF prog-id=245 op=LOAD Jan 27 12:54:16.723000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4723 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653231316463303930643365633838363261393835323262643736 Jan 27 12:54:16.726060 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:16.785396 containerd[1598]: time="2026-01-27T12:54:16.785152662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df48b7979-8w89r,Uid:32d8681f-2b1f-4fad-bc6d-7656e61dae7d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c93024474c6fa1df5365c4a04e3a34cdd378e8013a620e0c0aff5b6e0135d3ba\"" Jan 27 12:54:16.815329 containerd[1598]: time="2026-01-27T12:54:16.815281523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d95ff6778-flxqp,Uid:518046d9-b7bc-493b-96b2-44b9979317ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ce211dc090d3ec8862a98522bd7604465cf68f126b72dec165b37e0a7da6bc8\"" Jan 27 12:54:16.999102 kubelet[2768]: E0127 12:54:16.999031 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:16.999973 containerd[1598]: time="2026-01-27T12:54:16.999675140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2flk2,Uid:e9a3713f-f0ca-48fe-b261-15054e0b1d7d,Namespace:kube-system,Attempt:0,}" Jan 27 12:54:17.122683 containerd[1598]: time="2026-01-27T12:54:17.122601927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:17.124925 containerd[1598]: time="2026-01-27T12:54:17.124785996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
Jan 27 12:54:17.125033 containerd[1598]: time="2026-01-27T12:54:17.124995868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:17.125360 kubelet[2768]: E0127 12:54:17.125251 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:54:17.125360 kubelet[2768]: E0127 12:54:17.125305 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:54:17.125801 kubelet[2768]: E0127 12:54:17.125646 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:17.125801 kubelet[2768]: E0127 12:54:17.125778 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:17.127268 containerd[1598]: time="2026-01-27T12:54:17.127187770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:54:17.171577 systemd-networkd[1508]: cali68157f3b502: Link UP Jan 27 12:54:17.176584 systemd-networkd[1508]: cali68157f3b502: Gained carrier Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.057 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--2flk2-eth0 coredns-66bc5c9577- kube-system e9a3713f-f0ca-48fe-b261-15054e0b1d7d 826 0 2026-01-27 12:53:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-2flk2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68157f3b502 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.057 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.100 [INFO][4789] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" HandleID="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Workload="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.101 [INFO][4789] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" HandleID="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Workload="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-2flk2", "timestamp":"2026-01-27 12:54:17.100673768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.101 [INFO][4789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.101 [INFO][4789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.101 [INFO][4789] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.112 [INFO][4789] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.121 [INFO][4789] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.128 [INFO][4789] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.134 [INFO][4789] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.138 [INFO][4789] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.138 [INFO][4789] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.141 [INFO][4789] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784 Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.147 [INFO][4789] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.159 [INFO][4789] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.159 [INFO][4789] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] 
handle="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" host="localhost" Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.159 [INFO][4789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 12:54:17.205747 containerd[1598]: 2026-01-27 12:54:17.159 [INFO][4789] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" HandleID="k8s-pod-network.7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Workload="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.163 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2flk2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9a3713f-f0ca-48fe-b261-15054e0b1d7d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-2flk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68157f3b502", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.163 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.163 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68157f3b502 
ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.181 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.181 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2flk2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9a3713f-f0ca-48fe-b261-15054e0b1d7d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 12, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784", Pod:"coredns-66bc5c9577-2flk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68157f3b502", MAC:"7e:40:02:9b:84:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 12:54:17.206778 containerd[1598]: 2026-01-27 12:54:17.199 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" Namespace="kube-system" Pod="coredns-66bc5c9577-2flk2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2flk2-eth0" Jan 27 12:54:17.234572 containerd[1598]: time="2026-01-27T12:54:17.234482251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 
12:54:17.237988 containerd[1598]: time="2026-01-27T12:54:17.237787430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:54:17.237988 containerd[1598]: time="2026-01-27T12:54:17.237873917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:17.239300 kubelet[2768]: E0127 12:54:17.239230 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:17.239300 kubelet[2768]: E0127 12:54:17.239290 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:17.240016 kubelet[2768]: E0127 12:54:17.239528 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:17.240016 kubelet[2768]: E0127 12:54:17.239575 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:17.240104 containerd[1598]: time="2026-01-27T12:54:17.239826206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 12:54:17.239000 audit[4808]: NETFILTER_CFG table=filter:133 family=2 entries=62 op=nft_register_chain pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 12:54:17.239000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7ffd66144e60 a2=0 a3=7ffd66144e4c items=0 ppid=4072 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.239000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 12:54:17.253793 containerd[1598]: time="2026-01-27T12:54:17.253620035Z" level=info msg="connecting to shim 7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784" address="unix:///run/containerd/s/bf7d87928baa46434e1d7f0e8a996d9a0028b64fa650a95f7db229d025c3f856" namespace=k8s.io protocol=ttrpc version=3 Jan 27 12:54:17.278154 kubelet[2768]: E0127 12:54:17.278054 2768 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:17.279622 kubelet[2768]: E0127 12:54:17.279596 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:17.282082 kubelet[2768]: E0127 12:54:17.282036 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:17.282398 kubelet[2768]: E0127 12:54:17.281856 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:17.317317 systemd[1]: Started cri-containerd-7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784.scope - libcontainer container 7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784. 
Jan 27 12:54:17.326578 containerd[1598]: time="2026-01-27T12:54:17.326271302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:17.328730 containerd[1598]: time="2026-01-27T12:54:17.328506575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 12:54:17.328730 containerd[1598]: time="2026-01-27T12:54:17.328553781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:17.330820 kubelet[2768]: E0127 12:54:17.329656 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:17.330820 kubelet[2768]: E0127 12:54:17.330055 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:17.330820 kubelet[2768]: E0127 12:54:17.330135 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:17.330820 kubelet[2768]: E0127 12:54:17.330226 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:17.357000 audit: BPF prog-id=246 op=LOAD Jan 27 12:54:17.358000 audit: BPF prog-id=247 op=LOAD Jan 27 12:54:17.358000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.359000 audit: BPF prog-id=247 op=UNLOAD Jan 27 12:54:17.359000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.360000 audit: BPF prog-id=248 op=LOAD Jan 27 12:54:17.360000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.361000 audit: BPF prog-id=249 op=LOAD Jan 27 12:54:17.361000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.362000 audit: BPF prog-id=249 op=UNLOAD Jan 27 12:54:17.362000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.362000 audit: BPF prog-id=248 op=UNLOAD Jan 27 12:54:17.362000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.362000 audit: BPF prog-id=250 op=LOAD Jan 27 12:54:17.362000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4817 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.362000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633737633761366631343463346465383138343263323266633439 Jan 27 12:54:17.368982 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 27 12:54:17.408015 systemd-networkd[1508]: cali2a8a919100c: Gained IPv6LL Jan 27 12:54:17.428530 containerd[1598]: time="2026-01-27T12:54:17.428467068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2flk2,Uid:e9a3713f-f0ca-48fe-b261-15054e0b1d7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784\"" Jan 27 12:54:17.429677 kubelet[2768]: E0127 12:54:17.429644 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:17.436675 containerd[1598]: time="2026-01-27T12:54:17.436560959Z" level=info msg="CreateContainer within sandbox \"7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 12:54:17.453320 containerd[1598]: time="2026-01-27T12:54:17.453287012Z" level=info msg="Container 21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee: CDI devices from CRI Config.CDIDevices: []" Jan 27 12:54:17.461000 audit[4858]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:17.461000 audit[4858]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeba3fa590 a2=0 a3=7ffeba3fa57c items=0 ppid=2925 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:17.463498 containerd[1598]: time="2026-01-27T12:54:17.463467365Z" level=info msg="CreateContainer within sandbox \"7dc77c7a6f144c4de81842c22fc491923fdac69cf7ec8bd4de8e63527c88e784\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee\"" Jan 27 12:54:17.465488 containerd[1598]: time="2026-01-27T12:54:17.465346684Z" level=info msg="StartContainer for \"21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee\"" Jan 27 12:54:17.466543 containerd[1598]: time="2026-01-27T12:54:17.466304572Z" level=info msg="connecting to shim 21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee" address="unix:///run/containerd/s/bf7d87928baa46434e1d7f0e8a996d9a0028b64fa650a95f7db229d025c3f856" protocol=ttrpc version=3 Jan 27 12:54:17.469000 audit[4858]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:17.469000 audit[4858]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeba3fa590 a2=0 a3=7ffeba3fa57c items=0 ppid=2925 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
27 12:54:17.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:17.501283 systemd[1]: Started cri-containerd-21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee.scope - libcontainer container 21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee. Jan 27 12:54:17.519000 audit: BPF prog-id=251 op=LOAD Jan 27 12:54:17.520000 audit: BPF prog-id=252 op=LOAD Jan 27 12:54:17.520000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.520000 audit: BPF prog-id=252 op=UNLOAD Jan 27 12:54:17.520000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.521000 audit: BPF prog-id=253 op=LOAD Jan 27 12:54:17.521000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.521000 audit: BPF prog-id=254 op=LOAD Jan 27 12:54:17.521000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.522000 audit: BPF prog-id=254 op=UNLOAD Jan 27 12:54:17.522000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.522000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.522000 audit: BPF prog-id=253 op=UNLOAD Jan 27 12:54:17.522000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.522000 audit: BPF prog-id=255 op=LOAD Jan 27 12:54:17.522000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4817 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:17.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231653139303363323331313862623531323733333437346263373935 Jan 27 12:54:17.554821 containerd[1598]: time="2026-01-27T12:54:17.554671893Z" level=info msg="StartContainer for \"21e1903c23118bb512733474bc795deab3856842183061817b26aaf2fbb35eee\" returns successfully" Jan 27 12:54:17.727221 systemd-networkd[1508]: calic4bc24f12c3: Gained IPv6LL Jan 27 12:54:18.175273 systemd-networkd[1508]: calid3517555358: Gained IPv6LL Jan 27 12:54:18.284312 kubelet[2768]: E0127 12:54:18.284073 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:18.285582 kubelet[2768]: E0127 12:54:18.285468 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:18.285582 kubelet[2768]: E0127 12:54:18.285466 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:18.285970 kubelet[2768]: E0127 12:54:18.285609 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:18.331953 kubelet[2768]: I0127 12:54:18.331811 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2flk2" podStartSLOduration=39.331795943 podStartE2EDuration="39.331795943s" podCreationTimestamp="2026-01-27 12:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:54:18.302594006 +0000 UTC m=+45.476504437" watchObservedRunningTime="2026-01-27 12:54:18.331795943 +0000 UTC m=+45.505706374" Jan 27 12:54:18.489000 audit[4894]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:18.493401 kernel: kauditd_printk_skb: 284 callbacks suppressed Jan 27 12:54:18.493557 kernel: audit: type=1325 audit(1769518458.489:730): table=filter:136 family=2 entries=14 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:18.489000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca54d7780 a2=0 a3=7ffca54d776c items=0 ppid=2925 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:18.510503 kernel: audit: type=1300 audit(1769518458.489:730): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca54d7780 a2=0 a3=7ffca54d776c items=0 ppid=2925 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:18.510536 kernel: audit: type=1327 audit(1769518458.489:730): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:18.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:18.511000 audit[4894]: NETFILTER_CFG table=nat:137 family=2 entries=44 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:18.520779 kernel: audit: type=1325 audit(1769518458.511:731): table=nat:137 family=2 entries=44 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:18.520807 kernel: audit: type=1300 audit(1769518458.511:731): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca54d7780 a2=0 a3=7ffca54d776c items=0 ppid=2925 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:18.511000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca54d7780 a2=0 a3=7ffca54d776c items=0 ppid=2925 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:18.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:18.536750 kernel: audit: type=1327 audit(1769518458.511:731): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:19.135370 systemd-networkd[1508]: cali68157f3b502: Gained IPv6LL Jan 27 12:54:19.287436 kubelet[2768]: E0127 12:54:19.286888 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:19.565000 audit[4902]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:19.565000 audit[4902]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4d053b80 a2=0 a3=7ffc4d053b6c items=0 ppid=2925 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:19.584047 kernel: audit: type=1325 audit(1769518459.565:732): table=filter:138 family=2 entries=14 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:19.584114 kernel: audit: type=1300 audit(1769518459.565:732): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4d053b80 a2=0 a3=7ffc4d053b6c items=0 ppid=2925 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:19.584132 kernel: audit: type=1327 audit(1769518459.565:732): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:19.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:19.603000 audit[4902]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:19.603000 audit[4902]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc4d053b80 a2=0 a3=7ffc4d053b6c items=0 ppid=2925 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:54:19.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:54:19.612002 kernel: audit: type=1325 audit(1769518459.603:733): table=nat:139 family=2 entries=56 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:54:20.289487 kubelet[2768]: E0127 12:54:20.289365 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:21.292611 kubelet[2768]: E0127 12:54:21.292513 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:29.988776 containerd[1598]: time="2026-01-27T12:54:29.988608608Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 12:54:30.063749 containerd[1598]: time="2026-01-27T12:54:30.063510151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:30.065646 containerd[1598]: time="2026-01-27T12:54:30.065514528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 12:54:30.065646 containerd[1598]: time="2026-01-27T12:54:30.065601821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:30.066345 kubelet[2768]: E0127 12:54:30.066113 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:30.066345 kubelet[2768]: E0127 12:54:30.066193 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:30.066865 kubelet[2768]: E0127 12:54:30.066540 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:30.067025 containerd[1598]: time="2026-01-27T12:54:30.066870755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 12:54:30.141803 containerd[1598]: time="2026-01-27T12:54:30.141612890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:30.143537 containerd[1598]: time="2026-01-27T12:54:30.143404431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 12:54:30.143928 kubelet[2768]: E0127 12:54:30.143805 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:30.143994 kubelet[2768]: E0127 12:54:30.143983 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:30.144427 kubelet[2768]: E0127 12:54:30.144310 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:30.152416 containerd[1598]: time="2026-01-27T12:54:30.143460700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:30.152416 containerd[1598]: time="2026-01-27T12:54:30.144369898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 12:54:30.211996 containerd[1598]: time="2026-01-27T12:54:30.211877753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:30.213234 containerd[1598]: time="2026-01-27T12:54:30.213180516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 12:54:30.213234 containerd[1598]: time="2026-01-27T12:54:30.213269708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:30.213501 kubelet[2768]: E0127 12:54:30.213414 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:30.213501 kubelet[2768]: E0127 12:54:30.213449 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:30.213673 kubelet[2768]: E0127 12:54:30.213641 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:30.213766 kubelet[2768]: E0127 12:54:30.213679 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:30.214188 containerd[1598]: time="2026-01-27T12:54:30.214151856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 12:54:30.280098 containerd[1598]: time="2026-01-27T12:54:30.279845101Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:30.282140 containerd[1598]: time="2026-01-27T12:54:30.281989402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 12:54:30.282140 containerd[1598]: time="2026-01-27T12:54:30.282031412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:30.282393 kubelet[2768]: E0127 12:54:30.282316 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:30.282393 kubelet[2768]: E0127 12:54:30.282372 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:30.282812 kubelet[2768]: E0127 12:54:30.282635 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:30.282812 kubelet[2768]: E0127 12:54:30.282770 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:30.990793 containerd[1598]: time="2026-01-27T12:54:30.990420008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:54:31.068179 containerd[1598]: time="2026-01-27T12:54:31.068048300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:31.069870 containerd[1598]: time="2026-01-27T12:54:31.069690275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:54:31.069870 containerd[1598]: time="2026-01-27T12:54:31.069802708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:31.070098 kubelet[2768]: E0127 12:54:31.070049 2768 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:31.070098 kubelet[2768]: E0127 12:54:31.070084 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:31.070538 kubelet[2768]: E0127 12:54:31.070219 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:31.070538 kubelet[2768]: E0127 12:54:31.070246 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:31.071030 containerd[1598]: time="2026-01-27T12:54:31.071005842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 12:54:31.147627 containerd[1598]: time="2026-01-27T12:54:31.147460110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:31.149118 containerd[1598]: time="2026-01-27T12:54:31.149001849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 12:54:31.149176 containerd[1598]: time="2026-01-27T12:54:31.149121271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:31.149513 kubelet[2768]: E0127 12:54:31.149422 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:31.149513 kubelet[2768]: E0127 12:54:31.149493 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:31.149984 kubelet[2768]: E0127 12:54:31.149812 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:31.149984 kubelet[2768]: E0127 12:54:31.149864 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:31.150141 containerd[1598]: time="2026-01-27T12:54:31.150118420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 12:54:31.239223 containerd[1598]: time="2026-01-27T12:54:31.239129973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:31.241306 containerd[1598]: time="2026-01-27T12:54:31.241063313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 12:54:31.241306 containerd[1598]: time="2026-01-27T12:54:31.241117708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:31.241575 kubelet[2768]: E0127 12:54:31.241458 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:54:31.241575 kubelet[2768]: E0127 12:54:31.241540 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:54:31.241662 kubelet[2768]: E0127 12:54:31.241612 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:31.241662 kubelet[2768]: E0127 12:54:31.241651 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:31.988460 containerd[1598]: time="2026-01-27T12:54:31.988398397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:54:32.047125 containerd[1598]: time="2026-01-27T12:54:32.047017268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:32.048605 containerd[1598]: 
time="2026-01-27T12:54:32.048526747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:54:32.048605 containerd[1598]: time="2026-01-27T12:54:32.048582228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:32.048834 kubelet[2768]: E0127 12:54:32.048772 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:32.048834 kubelet[2768]: E0127 12:54:32.048828 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:32.049012 kubelet[2768]: E0127 12:54:32.048988 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:32.049069 kubelet[2768]: E0127 12:54:32.049018 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:42.989533 kubelet[2768]: E0127 12:54:42.989320 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:42.990491 kubelet[2768]: E0127 12:54:42.990228 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:43.351024 kubelet[2768]: E0127 12:54:43.350880 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:43.988294 kubelet[2768]: E0127 12:54:43.988165 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:44.991093 kubelet[2768]: E0127 12:54:44.990805 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:45.988110 kubelet[2768]: E0127 12:54:45.987681 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:54:45.991029 kubelet[2768]: E0127 12:54:45.990989 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:54:56.988985 containerd[1598]: time="2026-01-27T12:54:56.988749878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 12:54:57.079634 containerd[1598]: time="2026-01-27T12:54:57.079390153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:57.082184 containerd[1598]: time="2026-01-27T12:54:57.082092877Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 12:54:57.084206 containerd[1598]: time="2026-01-27T12:54:57.084147204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:57.085404 kubelet[2768]: E0127 12:54:57.084994 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:57.085404 kubelet[2768]: E0127 12:54:57.085053 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:54:57.085404 kubelet[2768]: E0127 12:54:57.085319 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:57.085404 kubelet[2768]: E0127 12:54:57.085361 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:54:57.089207 containerd[1598]: time="2026-01-27T12:54:57.089013589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 12:54:57.160179 containerd[1598]: time="2026-01-27T12:54:57.160093209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:57.162127 containerd[1598]: time="2026-01-27T12:54:57.162007293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 12:54:57.162127 containerd[1598]: time="2026-01-27T12:54:57.162041660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:57.162480 kubelet[2768]: E0127 12:54:57.162288 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:57.162480 kubelet[2768]: E0127 12:54:57.162337 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:54:57.162955 kubelet[2768]: E0127 12:54:57.162468 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:57.164485 containerd[1598]: time="2026-01-27T12:54:57.164415208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 12:54:57.495377 containerd[1598]: time="2026-01-27T12:54:57.495124290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:57.497242 containerd[1598]: time="2026-01-27T12:54:57.497100150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 12:54:57.497242 containerd[1598]: time="2026-01-27T12:54:57.497178707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:57.499231 kubelet[2768]: E0127 12:54:57.499184 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:57.500021 kubelet[2768]: E0127 12:54:57.499354 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:54:57.500021 kubelet[2768]: E0127 12:54:57.499439 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:57.500021 kubelet[2768]: E0127 12:54:57.499517 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:54:57.987688 kubelet[2768]: E0127 12:54:57.987531 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:54:58.993980 containerd[1598]: time="2026-01-27T12:54:58.993566316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:54:59.206286 containerd[1598]: time="2026-01-27T12:54:59.206074194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:59.207875 containerd[1598]: time="2026-01-27T12:54:59.207761757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:54:59.207875 containerd[1598]: time="2026-01-27T12:54:59.207835935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:59.208418 kubelet[2768]: E0127 12:54:59.208031 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:59.208418 kubelet[2768]: E0127 12:54:59.208066 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:54:59.208418 kubelet[2768]: E0127 12:54:59.208210 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:59.208418 kubelet[2768]: E0127 12:54:59.208240 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:54:59.210072 containerd[1598]: time="2026-01-27T12:54:59.209859699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 12:54:59.277870 containerd[1598]: time="2026-01-27T12:54:59.277543627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:59.279710 containerd[1598]: time="2026-01-27T12:54:59.279519452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 12:54:59.279710 containerd[1598]: 
time="2026-01-27T12:54:59.279605231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:59.280788 kubelet[2768]: E0127 12:54:59.279876 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:59.280788 kubelet[2768]: E0127 12:54:59.280763 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:54:59.281030 kubelet[2768]: E0127 12:54:59.280857 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:59.282867 containerd[1598]: time="2026-01-27T12:54:59.282830855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 12:54:59.403408 containerd[1598]: time="2026-01-27T12:54:59.403183341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:54:59.405401 containerd[1598]: time="2026-01-27T12:54:59.405297228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 12:54:59.405401 containerd[1598]: time="2026-01-27T12:54:59.405390914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 12:54:59.405785 kubelet[2768]: E0127 12:54:59.405573 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:59.405952 kubelet[2768]: E0127 12:54:59.405867 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:54:59.406110 kubelet[2768]: E0127 12:54:59.406033 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 12:54:59.406110 kubelet[2768]: E0127 12:54:59.406083 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:54:59.990059 containerd[1598]: time="2026-01-27T12:54:59.989835941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 12:55:00.081544 containerd[1598]: time="2026-01-27T12:55:00.081441621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:00.083427 containerd[1598]: time="2026-01-27T12:55:00.083295780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 12:55:00.083427 containerd[1598]: time="2026-01-27T12:55:00.083367420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:00.083789 kubelet[2768]: E0127 12:55:00.083740 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:55:00.083841 kubelet[2768]: E0127 12:55:00.083799 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:55:00.084052 kubelet[2768]: E0127 12:55:00.083989 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:00.084094 kubelet[2768]: E0127 12:55:00.084071 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:55:00.993289 containerd[1598]: time="2026-01-27T12:55:00.993156302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:55:01.171171 containerd[1598]: time="2026-01-27T12:55:01.170860562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:01.173954 containerd[1598]: time="2026-01-27T12:55:01.173384665Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:55:01.174391 containerd[1598]: time="2026-01-27T12:55:01.174029687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:01.174946 kubelet[2768]: E0127 12:55:01.174708 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:01.175352 kubelet[2768]: E0127 12:55:01.175010 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:01.175861 kubelet[2768]: E0127 12:55:01.175827 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:01.176444 kubelet[2768]: E0127 12:55:01.175884 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:55:05.987314 kubelet[2768]: E0127 12:55:05.987124 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:07.987797 kubelet[2768]: E0127 12:55:07.987279 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:07.987797 kubelet[2768]: E0127 12:55:07.987604 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:55:09.991216 kubelet[2768]: E0127 12:55:09.991123 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:55:10.495506 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:43918.service - OpenSSH per-connection server daemon (10.0.0.1:43918). Jan 27 12:55:10.498179 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 27 12:55:10.498247 kernel: audit: type=1130 audit(1769518510.494:734): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.130:22-10.0.0.1:43918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:10.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.130:22-10.0.0.1:43918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:10.657000 audit[4965]: USER_ACCT pid=4965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.662820 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:10.665032 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 43918 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:10.669694 systemd-logind[1575]: New session 9 of user core. 
Jan 27 12:55:10.660000 audit[4965]: CRED_ACQ pid=4965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.679718 kernel: audit: type=1101 audit(1769518510.657:735): pid=4965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.679787 kernel: audit: type=1103 audit(1769518510.660:736): pid=4965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.685547 kernel: audit: type=1006 audit(1769518510.660:737): pid=4965 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 27 12:55:10.660000 audit[4965]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc41dccad0 a2=3 a3=0 items=0 ppid=1 pid=4965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:10.660000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:10.698174 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 27 12:55:10.701674 kernel: audit: type=1300 audit(1769518510.660:737): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc41dccad0 a2=3 a3=0 items=0 ppid=1 pid=4965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:10.701841 kernel: audit: type=1327 audit(1769518510.660:737): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:10.702000 audit[4965]: USER_START pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.705000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.731857 kernel: audit: type=1105 audit(1769518510.702:738): pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.732276 kernel: audit: type=1103 audit(1769518510.705:739): pid=4972 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.858232 sshd[4972]: Connection closed by 10.0.0.1 port 43918 Jan 27 12:55:10.858555 
sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:10.861000 audit[4965]: USER_END pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.866678 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:43918.service: Deactivated successfully. Jan 27 12:55:10.870137 systemd[1]: session-9.scope: Deactivated successfully. Jan 27 12:55:10.875085 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Jan 27 12:55:10.861000 audit[4965]: CRED_DISP pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.876456 systemd-logind[1575]: Removed session 9. Jan 27 12:55:10.887674 kernel: audit: type=1106 audit(1769518510.861:740): pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.887746 kernel: audit: type=1104 audit(1769518510.861:741): pid=4965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:10.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.130:22-10.0.0.1:43918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:10.992519 kubelet[2768]: E0127 12:55:10.992348 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:55:12.987521 kubelet[2768]: E0127 12:55:12.987380 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:13.988069 kubelet[2768]: E0127 12:55:13.987758 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:55:13.992365 kubelet[2768]: E0127 12:55:13.992276 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:55:14.989492 kubelet[2768]: E0127 12:55:14.989363 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:55:15.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.130:22-10.0.0.1:34876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:15.877593 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:34876.service - OpenSSH per-connection server daemon (10.0.0.1:34876). 
Jan 27 12:55:15.881202 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:15.881294 kernel: audit: type=1130 audit(1769518515.876:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.130:22-10.0.0.1:34876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:16.001000 audit[5015]: USER_ACCT pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.002561 sshd[5015]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:16.005783 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:16.013033 systemd-logind[1575]: New session 10 of user core. Jan 27 12:55:16.003000 audit[5015]: CRED_ACQ pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.035153 kernel: audit: type=1101 audit(1769518516.001:744): pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.035250 kernel: audit: type=1103 audit(1769518516.003:745): pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.035305 kernel: audit: type=1006 audit(1769518516.003:746): pid=5015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 27 12:55:16.003000 audit[5015]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0ca72050 a2=3 a3=0 items=0 ppid=1 pid=5015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:16.064227 kernel: audit: type=1300 audit(1769518516.003:746): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0ca72050 a2=3 a3=0 items=0 ppid=1 pid=5015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:16.064541 kernel: audit: type=1327 audit(1769518516.003:746): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:16.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:16.076351 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 27 12:55:16.079000 audit[5015]: USER_START pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.079000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.112054 kernel: audit: type=1105 audit(1769518516.079:747): pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.112180 kernel: audit: type=1103 audit(1769518516.079:748): pid=5019 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.247093 sshd[5019]: Connection closed by 10.0.0.1 port 34876 Jan 27 12:55:16.245539 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:16.248000 audit[5015]: USER_END pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.256725 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:34876.service: Deactivated successfully. Jan 27 12:55:16.261316 systemd[1]: session-10.scope: Deactivated successfully. Jan 27 12:55:16.262978 kernel: audit: type=1106 audit(1769518516.248:749): pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.265695 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Jan 27 12:55:16.248000 audit[5015]: CRED_DISP pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.270124 systemd-logind[1575]: Removed session 10. Jan 27 12:55:16.281061 kernel: audit: type=1104 audit(1769518516.248:750): pid=5015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:16.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.130:22-10.0.0.1:34876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:20.988850 kubelet[2768]: E0127 12:55:20.988410 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:55:21.266777 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:34886.service - OpenSSH per-connection server daemon (10.0.0.1:34886). Jan 27 12:55:21.272785 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:21.272822 kernel: audit: type=1130 audit(1769518521.266:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.130:22-10.0.0.1:34886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:21.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.130:22-10.0.0.1:34886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:21.350000 audit[5034]: USER_ACCT pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.354334 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:21.356272 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 34886 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:21.351000 audit[5034]: CRED_ACQ pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.364390 systemd-logind[1575]: New session 11 of user core. 
Jan 27 12:55:21.375763 kernel: audit: type=1101 audit(1769518521.350:753): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.375811 kernel: audit: type=1103 audit(1769518521.351:754): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.375832 kernel: audit: type=1006 audit(1769518521.351:755): pid=5034 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 27 12:55:21.382865 kernel: audit: type=1300 audit(1769518521.351:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12d61fa0 a2=3 a3=0 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:21.351000 audit[5034]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12d61fa0 a2=3 a3=0 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:21.397087 kernel: audit: type=1327 audit(1769518521.351:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:21.351000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:21.403502 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 27 12:55:21.407000 audit[5034]: USER_START pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.410000 audit[5038]: CRED_ACQ pid=5038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.438067 kernel: audit: type=1105 audit(1769518521.407:756): pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.438142 kernel: audit: type=1103 audit(1769518521.410:757): pid=5038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.520668 sshd[5038]: Connection closed by 10.0.0.1 port 34886 Jan 27 12:55:21.522019 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:21.523000 audit[5034]: USER_END pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.527783 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:34886.service: Deactivated successfully. Jan 27 12:55:21.530727 systemd[1]: session-11.scope: Deactivated successfully. Jan 27 12:55:21.533240 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Jan 27 12:55:21.534533 systemd-logind[1575]: Removed session 11. Jan 27 12:55:21.523000 audit[5034]: CRED_DISP pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.550524 kernel: audit: type=1106 audit(1769518521.523:758): pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.550612 kernel: audit: type=1104 audit(1769518521.523:759): pid=5034 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:21.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.130:22-10.0.0.1:34886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:21.989308 kubelet[2768]: E0127 12:55:21.989191 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:55:21.989308 kubelet[2768]: E0127 12:55:21.989221 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:55:25.988887 kubelet[2768]: E0127 12:55:25.988471 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:55:26.541256 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:51442.service - OpenSSH per-connection server daemon (10.0.0.1:51442). Jan 27 12:55:26.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.130:22-10.0.0.1:51442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:26.544603 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:26.544704 kernel: audit: type=1130 audit(1769518526.540:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.130:22-10.0.0.1:51442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:26.616000 audit[5053]: USER_ACCT pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.617794 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 51442 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:26.620834 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:26.628070 systemd-logind[1575]: New session 12 of user core. Jan 27 12:55:26.618000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.646184 kernel: audit: type=1101 audit(1769518526.616:762): pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.646243 kernel: audit: type=1103 audit(1769518526.618:763): pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.646279 kernel: audit: type=1006 audit(1769518526.618:764): pid=5053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 27 12:55:26.653709 kernel: audit: type=1300 audit(1769518526.618:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff87497390 a2=3 a3=0 items=0 ppid=1 pid=5053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:26.618000 audit[5053]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff87497390 a2=3 a3=0 items=0 ppid=1 pid=5053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:26.654247 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 27 12:55:26.665772 kernel: audit: type=1327 audit(1769518526.618:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:26.618000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:26.660000 audit[5053]: USER_START pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.693052 kernel: audit: type=1105 audit(1769518526.660:765): pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.693112 kernel: audit: type=1103 audit(1769518526.662:766): pid=5057 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.662000 audit[5057]: CRED_ACQ pid=5057 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.767840 sshd[5057]: Connection closed by 10.0.0.1 port 51442 Jan 27 12:55:26.769870 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:26.771000 audit[5053]: USER_END pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.776072 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:51442.service: Deactivated successfully. Jan 27 12:55:26.779012 systemd[1]: session-12.scope: Deactivated successfully. Jan 27 12:55:26.780756 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Jan 27 12:55:26.782692 systemd-logind[1575]: Removed session 12. 
Jan 27 12:55:26.771000 audit[5053]: CRED_DISP pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.800385 kernel: audit: type=1106 audit(1769518526.771:767): pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.800513 kernel: audit: type=1104 audit(1769518526.771:768): pid=5053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:26.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.130:22-10.0.0.1:51442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:26.989671 kubelet[2768]: E0127 12:55:26.989418 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:55:28.991376 kubelet[2768]: E0127 12:55:28.991297 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:55:31.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.130:22-10.0.0.1:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:31.783834 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:51450.service - OpenSSH per-connection server daemon (10.0.0.1:51450). Jan 27 12:55:31.788443 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:31.788690 kernel: audit: type=1130 audit(1769518531.783:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.130:22-10.0.0.1:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:31.870000 audit[5071]: USER_ACCT pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.871885 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 51450 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:31.874499 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:31.881100 systemd-logind[1575]: New session 13 of user core. Jan 27 12:55:31.872000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.903698 kernel: audit: type=1101 audit(1769518531.870:771): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.903780 kernel: audit: type=1103 audit(1769518531.872:772): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.903819 kernel: audit: type=1006 audit(1769518531.872:773): pid=5071 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 27 12:55:31.913054 kernel: audit: type=1300 audit(1769518531.872:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc93993b0 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:31.872000 audit[5071]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc93993b0 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:31.872000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:31.937079 kernel: audit: type=1327 audit(1769518531.872:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:31.944568 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 27 12:55:31.948000 audit[5071]: USER_START pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.969575 kernel: audit: type=1105 audit(1769518531.948:774): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.969718 kernel: audit: type=1103 audit(1769518531.950:775): pid=5075 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:31.950000 audit[5075]: CRED_ACQ pid=5075 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:32.045496 sshd[5075]: Connection closed by 10.0.0.1 port 51450 Jan 27 12:55:32.048169 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:32.049000 audit[5071]: USER_END pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:32.054714 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:51450.service: Deactivated successfully. Jan 27 12:55:32.058517 systemd[1]: session-13.scope: Deactivated successfully. Jan 27 12:55:32.060538 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Jan 27 12:55:32.064885 systemd-logind[1575]: Removed session 13. Jan 27 12:55:32.049000 audit[5071]: CRED_DISP pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:32.075726 kernel: audit: type=1106 audit(1769518532.049:776): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:32.075830 kernel: audit: type=1104 audit(1769518532.049:777): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:32.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.130:22-10.0.0.1:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:33.988474 kubelet[2768]: E0127 12:55:33.988352 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:55:34.989589 kubelet[2768]: E0127 12:55:34.988539 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:55:35.988810 kubelet[2768]: E0127 12:55:35.988668 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:55:35.989135 kubelet[2768]: E0127 12:55:35.989046 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:37.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.130:22-10.0.0.1:39832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:37.062365 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:39832.service - OpenSSH per-connection server daemon (10.0.0.1:39832). Jan 27 12:55:37.067225 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:37.067262 kernel: audit: type=1130 audit(1769518537.061:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.130:22-10.0.0.1:39832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:37.167000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.169981 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:37.173126 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:37.170000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.185460 systemd-logind[1575]: New session 14 of user core. Jan 27 12:55:37.195667 kernel: audit: type=1101 audit(1769518537.167:780): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.195746 kernel: audit: type=1103 audit(1769518537.170:781): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.195789 kernel: audit: type=1006 audit(1769518537.170:782): pid=5097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 27 12:55:37.206016 kernel: audit: type=1300 audit(1769518537.170:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff051d4650 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:37.170000 audit[5097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff051d4650 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:37.208271 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 27 12:55:37.170000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:37.230241 kernel: audit: type=1327 audit(1769518537.170:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:37.213000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.258985 kernel: audit: type=1105 audit(1769518537.213:783): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.259087 kernel: audit: type=1103 audit(1769518537.217:784): pid=5101 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.217000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.352375 sshd[5101]: Connection closed by 10.0.0.1 port 39832 Jan 27 12:55:37.352807 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:37.353000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.360288 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:39832.service: Deactivated successfully. Jan 27 12:55:37.368066 kernel: audit: type=1106 audit(1769518537.353:785): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.368154 kernel: audit: type=1104 audit(1769518537.354:786): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.354000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:37.369013 systemd[1]: session-14.scope: Deactivated successfully. Jan 27 12:55:37.371667 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Jan 27 12:55:37.378533 systemd-logind[1575]: Removed session 14. 
Jan 27 12:55:37.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.130:22-10.0.0.1:39832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:37.989698 kubelet[2768]: E0127 12:55:37.989468 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:55:37.990402 kubelet[2768]: E0127 12:55:37.989768 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:55:42.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.130:22-10.0.0.1:54430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:42.369799 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:54430.service - OpenSSH per-connection server daemon (10.0.0.1:54430). Jan 27 12:55:42.386569 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:42.386719 kernel: audit: type=1130 audit(1769518542.369:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.130:22-10.0.0.1:54430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:42.487000 audit[5120]: USER_ACCT pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.492568 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:42.495040 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 54430 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:42.503102 systemd-logind[1575]: New session 15 of user core. 
Jan 27 12:55:42.489000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.514989 kernel: audit: type=1101 audit(1769518542.487:789): pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.515079 kernel: audit: type=1103 audit(1769518542.489:790): pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.515115 kernel: audit: type=1006 audit(1769518542.489:791): pid=5120 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 27 12:55:42.489000 audit[5120]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8c5e560 a2=3 a3=0 items=0 ppid=1 pid=5120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:42.538283 kernel: audit: type=1300 audit(1769518542.489:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8c5e560 a2=3 a3=0 items=0 ppid=1 pid=5120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:42.489000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:42.554188 kernel: audit: type=1327 audit(1769518542.489:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:42.558222 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 27 12:55:42.562000 audit[5120]: USER_START pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.567000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.595764 kernel: audit: type=1105 audit(1769518542.562:792): pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.595840 kernel: audit: type=1103 audit(1769518542.567:793): pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.711411 sshd[5124]: Connection closed by 10.0.0.1 port 54430 Jan 27 12:55:42.713323 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:42.717000 audit[5120]: USER_END pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.724674 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:54430.service: Deactivated successfully. Jan 27 12:55:42.730494 systemd[1]: session-15.scope: Deactivated successfully. Jan 27 12:55:42.739760 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Jan 27 12:55:42.748808 kernel: audit: type=1106 audit(1769518542.717:794): pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.748990 kernel: audit: type=1104 audit(1769518542.717:795): pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.717000 audit[5120]: CRED_DISP pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:42.749492 systemd-logind[1575]: Removed session 15. Jan 27 12:55:42.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.130:22-10.0.0.1:54430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:43.988226 containerd[1598]: time="2026-01-27T12:55:43.988070632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 12:55:44.099309 containerd[1598]: time="2026-01-27T12:55:44.099195425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:44.101229 containerd[1598]: time="2026-01-27T12:55:44.101146001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 12:55:44.101826 containerd[1598]: time="2026-01-27T12:55:44.101371766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:44.102021 kubelet[2768]: E0127 12:55:44.101945 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:55:44.102021 kubelet[2768]: E0127 12:55:44.101982 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 12:55:44.102546 kubelet[2768]: E0127 12:55:44.102225 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:44.106781 containerd[1598]: time="2026-01-27T12:55:44.106714950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 12:55:44.177981 containerd[1598]: time="2026-01-27T12:55:44.177771163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:44.179725 containerd[1598]: time="2026-01-27T12:55:44.179587291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 12:55:44.179725 containerd[1598]: time="2026-01-27T12:55:44.179693309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:44.180315 kubelet[2768]: E0127 12:55:44.180273 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:55:44.180647 kubelet[2768]: E0127 12:55:44.180419 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 12:55:44.180774 kubelet[2768]: E0127 12:55:44.180695 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc77d7c4-lxzpr_calico-system(7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:44.180813 kubelet[2768]: E0127 12:55:44.180763 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:55:44.986761 kubelet[2768]: E0127 12:55:44.986529 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:45.988519 containerd[1598]: time="2026-01-27T12:55:45.988440754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 12:55:46.055955 containerd[1598]: time="2026-01-27T12:55:46.055769763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:46.057923 containerd[1598]: time="2026-01-27T12:55:46.057816742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 12:55:46.058119 containerd[1598]: time="2026-01-27T12:55:46.057884524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:46.058343 kubelet[2768]: E0127 12:55:46.058299 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:55:46.058343 kubelet[2768]: E0127 12:55:46.058336 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 12:55:46.058732 kubelet[2768]: E0127 12:55:46.058431 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d95ff6778-flxqp_calico-system(518046d9-b7bc-493b-96b2-44b9979317ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:46.058732 kubelet[2768]: E0127 12:55:46.058460 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:55:46.986821 kubelet[2768]: E0127 12:55:46.986774 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:55:47.727389 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:54436.service - OpenSSH per-connection server daemon (10.0.0.1:54436). Jan 27 12:55:47.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.130:22-10.0.0.1:54436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:47.738031 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:47.738216 kernel: audit: type=1130 audit(1769518547.726:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.130:22-10.0.0.1:54436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:47.837000 audit[5186]: USER_ACCT pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.839272 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 54436 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:47.842734 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:47.849828 systemd-logind[1575]: New session 16 of user core. 
Jan 27 12:55:47.840000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.861823 kernel: audit: type=1101 audit(1769518547.837:798): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.862016 kernel: audit: type=1103 audit(1769518547.840:799): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.862047 kernel: audit: type=1006 audit(1769518547.840:800): pid=5186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 27 12:55:47.868695 kernel: audit: type=1300 audit(1769518547.840:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed3941f00 a2=3 a3=0 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:47.840000 audit[5186]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed3941f00 a2=3 a3=0 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:47.881520 kernel: audit: type=1327 audit(1769518547.840:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:47.840000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:47.891708 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 27 12:55:47.897000 audit[5186]: USER_START pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.899000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.922692 kernel: audit: type=1105 audit(1769518547.897:801): pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:47.922766 kernel: audit: type=1103 audit(1769518547.899:802): pid=5190 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:48.048252 sshd[5190]: Connection closed by 10.0.0.1 port 54436 Jan 27 12:55:48.049034 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:48.049000 audit[5186]: USER_END pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:48.055161 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Jan 27 12:55:48.056307 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:54436.service: Deactivated successfully. Jan 27 12:55:48.060497 systemd[1]: session-16.scope: Deactivated successfully. Jan 27 12:55:48.064201 systemd-logind[1575]: Removed session 16. Jan 27 12:55:48.050000 audit[5186]: CRED_DISP pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:48.080321 kernel: audit: type=1106 audit(1769518548.049:803): pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:48.080385 kernel: audit: type=1104 audit(1769518548.050:804): pid=5186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:48.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.130:22-10.0.0.1:54436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:48.989534 containerd[1598]: time="2026-01-27T12:55:48.989443256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:55:49.072095 containerd[1598]: time="2026-01-27T12:55:49.071832339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:49.073756 containerd[1598]: time="2026-01-27T12:55:49.073663546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:55:49.073756 containerd[1598]: time="2026-01-27T12:55:49.073712540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:49.074213 kubelet[2768]: E0127 12:55:49.074170 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:49.076032 kubelet[2768]: E0127 12:55:49.074223 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:49.076032 kubelet[2768]: E0127 12:55:49.074843 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-8w89r_calico-apiserver(32d8681f-2b1f-4fad-bc6d-7656e61dae7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:49.076032 kubelet[2768]: E0127 12:55:49.074993 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:55:49.076176 containerd[1598]: time="2026-01-27T12:55:49.075466810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 12:55:49.220789 containerd[1598]: time="2026-01-27T12:55:49.220689820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:49.222657 containerd[1598]: time="2026-01-27T12:55:49.222458179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 12:55:49.222657 containerd[1598]: time="2026-01-27T12:55:49.222511664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:49.223005 kubelet[2768]: E0127 12:55:49.222814 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:55:49.223005 kubelet[2768]: E0127 12:55:49.222886 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 12:55:49.223088 kubelet[2768]: E0127 12:55:49.223068 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtm9p_calico-system(13a845a0-aaa5-4e80-8a2f-691163970ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:49.223227 kubelet[2768]: E0127 12:55:49.223142 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:55:49.989429 containerd[1598]: time="2026-01-27T12:55:49.988844668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 12:55:50.081330 containerd[1598]: time="2026-01-27T12:55:50.081242524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:50.083253 containerd[1598]: time="2026-01-27T12:55:50.083146814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 12:55:50.083374 containerd[1598]: time="2026-01-27T12:55:50.083269304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:50.083796 kubelet[2768]: E0127 12:55:50.083598 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:55:50.083796 kubelet[2768]: E0127 12:55:50.083742 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 12:55:50.084331 kubelet[2768]: E0127 12:55:50.083823 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:50.086213 containerd[1598]: time="2026-01-27T12:55:50.085138841Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 12:55:50.159974 containerd[1598]: time="2026-01-27T12:55:50.159737141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:50.162141 containerd[1598]: time="2026-01-27T12:55:50.161877684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 12:55:50.162141 containerd[1598]: time="2026-01-27T12:55:50.162132781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:50.162468 kubelet[2768]: E0127 12:55:50.162354 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:55:50.162468 kubelet[2768]: E0127 12:55:50.162458 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 12:55:50.162695 kubelet[2768]: E0127 12:55:50.162585 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5vwvj_calico-system(6af69036-827e-49bb-8e7c-3940b856830f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:50.163978 kubelet[2768]: E0127 12:55:50.162745 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:55:52.990806 containerd[1598]: time="2026-01-27T12:55:52.990702565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 12:55:53.059177 containerd[1598]: time="2026-01-27T12:55:53.059053139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 12:55:53.062536 containerd[1598]: time="2026-01-27T12:55:53.062405925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 12:55:53.062536 containerd[1598]: 
time="2026-01-27T12:55:53.062485544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 12:55:53.062764 kubelet[2768]: E0127 12:55:53.062680 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:53.062764 kubelet[2768]: E0127 12:55:53.062729 2768 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 12:55:53.064066 kubelet[2768]: E0127 12:55:53.062810 2768 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6df48b7979-cgdx9_calico-apiserver(e6d3c258-6f1e-4868-8f36-862014b4b2fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 12:55:53.064066 kubelet[2768]: E0127 12:55:53.062849 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:55:53.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.130:22-10.0.0.1:59748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:53.067189 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:59748.service - OpenSSH per-connection server daemon (10.0.0.1:59748). Jan 27 12:55:53.070044 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:55:53.070089 kernel: audit: type=1130 audit(1769518553.065:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.130:22-10.0.0.1:59748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:53.161000 audit[5204]: USER_ACCT pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.163511 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 59748 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:53.166196 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:53.172017 systemd-logind[1575]: New session 17 of user core. 
Jan 27 12:55:53.163000 audit[5204]: CRED_ACQ pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.185011 kernel: audit: type=1101 audit(1769518553.161:807): pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.185076 kernel: audit: type=1103 audit(1769518553.163:808): pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.185095 kernel: audit: type=1006 audit(1769518553.163:809): pid=5204 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 27 12:55:53.163000 audit[5204]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6c43cbe0 a2=3 a3=0 items=0 ppid=1 pid=5204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:53.216374 kernel: audit: type=1300 audit(1769518553.163:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6c43cbe0 a2=3 a3=0 items=0 ppid=1 pid=5204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:53.216548 kernel: audit: type=1327 audit(1769518553.163:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:53.163000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:53.226447 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 27 12:55:53.235000 audit[5204]: USER_START pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.257953 kernel: audit: type=1105 audit(1769518553.235:810): pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.240000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.280041 kernel: audit: type=1103 audit(1769518553.240:811): pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.400602 sshd[5208]: Connection closed by 10.0.0.1 port 59748 Jan 27 12:55:53.402171 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:53.402000 audit[5204]: USER_END pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.413413 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:59748.service: Deactivated successfully. Jan 27 12:55:53.417776 systemd[1]: session-17.scope: Deactivated successfully. Jan 27 12:55:53.402000 audit[5204]: CRED_DISP pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.423068 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Jan 27 12:55:53.430858 kernel: audit: type=1106 audit(1769518553.402:812): pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.431021 kernel: audit: type=1104 audit(1769518553.402:813): pid=5204 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.130:22-10.0.0.1:59748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:53.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.130:22-10.0.0.1:59762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:53.429889 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:59762.service - OpenSSH per-connection server daemon (10.0.0.1:59762). Jan 27 12:55:53.433187 systemd-logind[1575]: Removed session 17. Jan 27 12:55:53.499000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.501602 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 59762 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:53.500000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.500000 audit[5223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe005cb2c0 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:53.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:53.505354 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:53.514545 systemd-logind[1575]: New session 18 of user core. Jan 27 12:55:53.521369 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 27 12:55:53.524000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.527000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.724082 sshd[5227]: Connection closed by 10.0.0.1 port 59762 Jan 27 12:55:53.725521 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:53.731000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.731000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.742366 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:59762.service: Deactivated successfully. 
Jan 27 12:55:53.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.130:22-10.0.0.1:59762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:53.746417 systemd[1]: session-18.scope: Deactivated successfully. Jan 27 12:55:53.750816 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Jan 27 12:55:53.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.130:22-10.0.0.1:59768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:53.755192 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:59768.service - OpenSSH per-connection server daemon (10.0.0.1:59768). Jan 27 12:55:53.760781 systemd-logind[1575]: Removed session 18. Jan 27 12:55:53.841525 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 59768 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:53.839000 audit[5239]: USER_ACCT pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.841000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.842000 audit[5239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadc613a0 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:53.842000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:53.845408 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:53.856096 systemd-logind[1575]: New session 19 of user core. Jan 27 12:55:53.862418 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 27 12:55:53.867000 audit[5239]: USER_START pid=5239 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:53.873000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:54.001481 sshd[5243]: Connection closed by 10.0.0.1 port 59768 Jan 27 12:55:54.002376 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:54.003000 audit[5239]: USER_END pid=5239 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:54.003000 audit[5239]: CRED_DISP pid=5239 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:54.010426 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:59768.service: Deactivated successfully. Jan 27 12:55:54.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.130:22-10.0.0.1:59768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:54.019580 systemd[1]: session-19.scope: Deactivated successfully. Jan 27 12:55:54.021584 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Jan 27 12:55:54.025861 systemd-logind[1575]: Removed session 19. 
Jan 27 12:55:57.988451 kubelet[2768]: E0127 12:55:57.988258 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:55:57.990982 kubelet[2768]: E0127 12:55:57.990760 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:55:59.022737 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:59776.service - OpenSSH per-connection server daemon (10.0.0.1:59776). Jan 27 12:55:59.023225 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 27 12:55:59.023280 kernel: audit: type=1130 audit(1769518559.020:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.130:22-10.0.0.1:59776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:59.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.130:22-10.0.0.1:59776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:55:59.104000 audit[5256]: USER_ACCT pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.107159 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 59776 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:55:59.109450 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:55:59.119714 systemd-logind[1575]: New session 20 of user core. 
Jan 27 12:55:59.106000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.139400 kernel: audit: type=1101 audit(1769518559.104:834): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.139495 kernel: audit: type=1103 audit(1769518559.106:835): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.139514 kernel: audit: type=1006 audit(1769518559.106:836): pid=5256 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 27 12:55:59.149727 kernel: audit: type=1300 audit(1769518559.106:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4463e420 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:59.106000 audit[5256]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4463e420 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:55:59.106000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:59.174561 kernel: audit: type=1327 audit(1769518559.106:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:55:59.180384 systemd[1]: Started session-20.scope - Session 20 of User core. 
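The PROCTITLE records above carry the process command line hex-encoded; 737368642D73657373696F6E3A20636F7265205B707269765D decodes to "sshd-session: core [priv]". A tiny decoder sketch (mine, not an audit userspace tool) for convenience:

import sys

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex payload; argv words are NUL-separated."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", errors="replace")

if __name__ == "__main__":
    # e.g. 737368642D73657373696F6E3A20636F7265205B707269765D
    print(decode_proctitle(sys.argv[1]))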
Jan 27 12:55:59.183000 audit[5256]: USER_START pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.185000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.218734 kernel: audit: type=1105 audit(1769518559.183:837): pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.218845 kernel: audit: type=1103 audit(1769518559.185:838): pid=5260 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.321407 sshd[5260]: Connection closed by 10.0.0.1 port 59776 Jan 27 12:55:59.321585 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Jan 27 12:55:59.322000 audit[5256]: USER_END pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.327110 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:59776.service: Deactivated successfully. Jan 27 12:55:59.330574 systemd[1]: session-20.scope: Deactivated successfully. Jan 27 12:55:59.333099 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Jan 27 12:55:59.336100 systemd-logind[1575]: Removed session 20. Jan 27 12:55:59.339079 kernel: audit: type=1106 audit(1769518559.322:839): pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.339127 kernel: audit: type=1104 audit(1769518559.322:840): pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.322000 audit[5256]: CRED_DISP pid=5256 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:55:59.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.130:22-10.0.0.1:59776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:55:59.989307 kubelet[2768]: E0127 12:55:59.988801 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:56:02.989736 kubelet[2768]: E0127 12:56:02.989085 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:56:04.345291 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:57986.service - OpenSSH per-connection server daemon (10.0.0.1:57986). Jan 27 12:56:04.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.130:22-10.0.0.1:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:04.350042 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:04.350131 kernel: audit: type=1130 audit(1769518564.343:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.130:22-10.0.0.1:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:04.429375 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 57986 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:04.427000 audit[5274]: USER_ACCT pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.432307 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:04.440300 systemd-logind[1575]: New session 21 of user core. 
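Each accepted login above logs the client key as an OpenSSH-style fingerprint (SHA256:CAAT..., the base64 of the SHA-256 of the raw key blob with padding stripped). To check which local public key that corresponds to, here is a small sketch along those lines, assuming a standard authorized_keys-format line as input:

import base64
import hashlib
import sys

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    """Compute the SHA256:... fingerprint sshd logs for a public key."""
    # Field 2 of an authorized_keys-style line is the base64-encoded key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH base64-encodes the digest and drops the '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

if __name__ == "__main__":
    line = sys.argv[1] if len(sys.argv) > 1 else sys.stdin.readline()
    print(openssh_sha256_fingerprint(line))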
Jan 27 12:56:04.428000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.466999 kernel: audit: type=1101 audit(1769518564.427:843): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.467077 kernel: audit: type=1103 audit(1769518564.428:844): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.467110 kernel: audit: type=1006 audit(1769518564.429:845): pid=5274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 27 12:56:04.429000 audit[5274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc19af470 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:04.498770 kernel: audit: type=1300 audit(1769518564.429:845): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc19af470 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:04.498866 kernel: audit: type=1327 audit(1769518564.429:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:04.429000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:04.508361 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 27 12:56:04.512000 audit[5274]: USER_START pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.514000 audit[5278]: CRED_ACQ pid=5278 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.538721 kernel: audit: type=1105 audit(1769518564.512:846): pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.538807 kernel: audit: type=1103 audit(1769518564.514:847): pid=5278 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.639065 sshd[5278]: Connection closed by 10.0.0.1 port 57986 Jan 27 12:56:04.639720 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:04.642000 audit[5274]: USER_END pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.647544 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:57986.service: Deactivated successfully. Jan 27 12:56:04.654189 systemd[1]: session-21.scope: Deactivated successfully. Jan 27 12:56:04.657412 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Jan 27 12:56:04.660299 systemd-logind[1575]: Removed session 21. Jan 27 12:56:04.642000 audit[5274]: CRED_DISP pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.675515 kernel: audit: type=1106 audit(1769518564.642:848): pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.675768 kernel: audit: type=1104 audit(1769518564.642:849): pid=5274 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:04.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.130:22-10.0.0.1:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:04.992386 kubelet[2768]: E0127 12:56:04.992199 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:56:06.992548 kubelet[2768]: E0127 12:56:06.989048 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:56:07.986723 kubelet[2768]: E0127 12:56:07.986561 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:08.989429 kubelet[2768]: E0127 12:56:08.989332 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:56:08.990470 kubelet[2768]: E0127 12:56:08.990415 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:56:09.660453 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002). 
Jan 27 12:56:09.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.130:22-10.0.0.1:58002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:09.666455 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:09.666514 kernel: audit: type=1130 audit(1769518569.659:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.130:22-10.0.0.1:58002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:09.768000 audit[5292]: USER_ACCT pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.771239 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:09.776710 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:09.768000 audit[5292]: CRED_ACQ pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.787774 systemd-logind[1575]: New session 22 of user core. Jan 27 12:56:09.797267 kernel: audit: type=1101 audit(1769518569.768:852): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.797331 kernel: audit: type=1103 audit(1769518569.768:853): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.797359 kernel: audit: type=1006 audit(1769518569.768:854): pid=5292 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 27 12:56:09.768000 audit[5292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe396f17b0 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:09.824833 kernel: audit: type=1300 audit(1769518569.768:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe396f17b0 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:09.827289 kernel: audit: type=1327 audit(1769518569.768:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:09.768000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:09.825446 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 27 12:56:09.830000 audit[5292]: USER_START pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.837000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.869674 kernel: audit: type=1105 audit(1769518569.830:855): pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.869789 kernel: audit: type=1103 audit(1769518569.837:856): pid=5296 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.969671 sshd[5296]: Connection closed by 10.0.0.1 port 58002 Jan 27 12:56:09.970125 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:09.971000 audit[5292]: USER_END pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.978129 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:58002.service: Deactivated successfully. Jan 27 12:56:09.981493 systemd[1]: session-22.scope: Deactivated successfully. Jan 27 12:56:09.988874 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Jan 27 12:56:09.971000 audit[5292]: CRED_DISP pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.993296 systemd-logind[1575]: Removed session 22. Jan 27 12:56:10.007132 kernel: audit: type=1106 audit(1769518569.971:857): pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:10.007192 kernel: audit: type=1104 audit(1769518569.971:858): pid=5292 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:09.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.130:22-10.0.0.1:58002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:13.987989 kubelet[2768]: E0127 12:56:13.987594 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:15.002837 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:15.003063 kernel: audit: type=1130 audit(1769518574.985:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.130:22-10.0.0.1:51246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:14.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.130:22-10.0.0.1:51246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:15.003178 kubelet[2768]: E0127 12:56:14.989512 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:56:14.987593 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:51246.service - OpenSSH per-connection server daemon (10.0.0.1:51246). Jan 27 12:56:15.077000 audit[5341]: USER_ACCT pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.079444 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 51246 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:15.082357 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:15.092986 systemd-logind[1575]: New session 23 of user core. 
Jan 27 12:56:15.079000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.116072 kernel: audit: type=1101 audit(1769518575.077:861): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.116159 kernel: audit: type=1103 audit(1769518575.079:862): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.116213 kernel: audit: type=1006 audit(1769518575.079:863): pid=5341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 27 12:56:15.079000 audit[5341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7b08490 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:15.149120 kernel: audit: type=1300 audit(1769518575.079:863): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7b08490 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:15.149341 kernel: audit: type=1327 audit(1769518575.079:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:15.079000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:15.161103 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 27 12:56:15.166000 audit[5341]: USER_START pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.168000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.206799 kernel: audit: type=1105 audit(1769518575.166:864): pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.207059 kernel: audit: type=1103 audit(1769518575.168:865): pid=5345 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.298378 sshd[5345]: Connection closed by 10.0.0.1 port 51246 Jan 27 12:56:15.298792 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:15.300000 audit[5341]: USER_END pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.307261 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:51246.service: Deactivated successfully. Jan 27 12:56:15.312485 systemd[1]: session-23.scope: Deactivated successfully. Jan 27 12:56:15.314479 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. Jan 27 12:56:15.317212 systemd-logind[1575]: Removed session 23. Jan 27 12:56:15.300000 audit[5341]: CRED_DISP pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.345736 kernel: audit: type=1106 audit(1769518575.300:866): pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.345808 kernel: audit: type=1104 audit(1769518575.300:867): pid=5341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:15.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.130:22-10.0.0.1:51246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:16.989271 kubelet[2768]: E0127 12:56:16.989134 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:56:17.987571 kubelet[2768]: E0127 12:56:17.987488 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:17.990013 kubelet[2768]: E0127 12:56:17.989448 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:56:19.988250 kubelet[2768]: E0127 12:56:19.988178 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:56:20.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.130:22-10.0.0.1:51256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:20.318246 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:51256.service - OpenSSH per-connection server daemon (10.0.0.1:51256). Jan 27 12:56:20.320550 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:20.320673 kernel: audit: type=1130 audit(1769518580.316:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.130:22-10.0.0.1:51256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:20.384000 audit[5358]: USER_ACCT pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.386845 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 51256 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:20.390292 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:20.396173 kernel: audit: type=1101 audit(1769518580.384:870): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.386000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.402391 systemd-logind[1575]: New session 24 of user core. Jan 27 12:56:20.411748 kernel: audit: type=1103 audit(1769518580.386:871): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.411836 kernel: audit: type=1006 audit(1769518580.386:872): pid=5358 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 27 12:56:20.411972 kernel: audit: type=1300 audit(1769518580.386:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11c6a2b0 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:20.386000 audit[5358]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11c6a2b0 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:20.386000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:20.430001 kernel: audit: type=1327 audit(1769518580.386:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:20.432295 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 27 12:56:20.435000 audit[5358]: USER_START pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.453996 kernel: audit: type=1105 audit(1769518580.435:873): pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.454080 kernel: audit: type=1103 audit(1769518580.438:874): pid=5362 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.438000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.546780 sshd[5362]: Connection closed by 10.0.0.1 port 51256 Jan 27 12:56:20.547789 sshd-session[5358]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:20.547000 audit[5358]: USER_END pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.556197 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:51256.service: Deactivated successfully. Jan 27 12:56:20.560836 systemd[1]: session-24.scope: Deactivated successfully. Jan 27 12:56:20.563851 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Jan 27 12:56:20.564967 kernel: audit: type=1106 audit(1769518580.547:875): pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.565043 kernel: audit: type=1104 audit(1769518580.547:876): pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.547000 audit[5358]: CRED_DISP pid=5358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:20.566663 systemd-logind[1575]: Removed session 24. Jan 27 12:56:20.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.130:22-10.0.0.1:51256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:20.989022 kubelet[2768]: E0127 12:56:20.988716 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:56:22.987175 kubelet[2768]: E0127 12:56:22.987092 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:23.988564 kubelet[2768]: E0127 12:56:23.988485 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:56:25.567707 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:38750.service - OpenSSH per-connection server daemon (10.0.0.1:38750). Jan 27 12:56:25.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.130:22-10.0.0.1:38750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:25.571397 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:25.571602 kernel: audit: type=1130 audit(1769518585.566:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.130:22-10.0.0.1:38750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:25.651000 audit[5377]: USER_ACCT pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.654966 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 38750 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:25.656747 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:25.653000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.665988 systemd-logind[1575]: New session 25 of user core. 
Jan 27 12:56:25.678037 kernel: audit: type=1101 audit(1769518585.651:879): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.678177 kernel: audit: type=1103 audit(1769518585.653:880): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.678210 kernel: audit: type=1006 audit(1769518585.653:881): pid=5377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 27 12:56:25.653000 audit[5377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff813a3920 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:25.697970 kernel: audit: type=1300 audit(1769518585.653:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff813a3920 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:25.698044 kernel: audit: type=1327 audit(1769518585.653:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:25.653000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:25.712220 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 27 12:56:25.715000 audit[5377]: USER_START pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.718000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.738581 kernel: audit: type=1105 audit(1769518585.715:882): pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.738774 kernel: audit: type=1103 audit(1769518585.718:883): pid=5381 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.820701 sshd[5381]: Connection closed by 10.0.0.1 port 38750 Jan 27 12:56:25.820002 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:25.820000 audit[5377]: USER_END pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.820000 audit[5377]: CRED_DISP pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.842183 kernel: audit: type=1106 audit(1769518585.820:884): pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.842276 kernel: audit: type=1104 audit(1769518585.820:885): pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:25.852105 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:38750.service: Deactivated successfully. Jan 27 12:56:25.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.130:22-10.0.0.1:38750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:25.854283 systemd[1]: session-25.scope: Deactivated successfully. Jan 27 12:56:25.855513 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Jan 27 12:56:25.857468 systemd-logind[1575]: Removed session 25. 
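The recurring kubelet dns.go:154 warnings in this stretch (12:56:07, 12:56:13, 12:56:17, 12:56:22, and again below) mean the node's resolv.conf lists more nameservers than kubelet will pass through, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. The sketch below mimics that trimming on a resolv.conf; the three-entry cap is an assumption on my part (it matches the applied line above and glibc's classic resolver limit), not something taken from kubelet's source here.

import sys

ASSUMED_LIMIT = 3  # assumption: matches the applied line above and the classic resolver limit

def nameservers(path):
    """Return the nameserver addresses listed in a resolv.conf-style file."""
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers

if __name__ == "__main__":
    ns = nameservers(sys.argv[1] if len(sys.argv) > 1 else "/etc/resolv.conf")
    applied, omitted = ns[:ASSUMED_LIMIT], ns[ASSUMED_LIMIT:]
    print("applied:", " ".join(applied))
    if omitted:
        print("omitted:", " ".join(omitted))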
Jan 27 12:56:26.987763 kubelet[2768]: E0127 12:56:26.986863 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:29.987935 kubelet[2768]: E0127 12:56:29.987823 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:56:30.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.130:22-10.0.0.1:38760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:30.863510 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:30.863580 kernel: audit: type=1130 audit(1769518590.858:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.130:22-10.0.0.1:38760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:30.859969 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:38760.service - OpenSSH per-connection server daemon (10.0.0.1:38760). Jan 27 12:56:30.962000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:30.966512 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 38760 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:30.969422 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:30.965000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:30.980239 systemd-logind[1575]: New session 26 of user core. 
Jan 27 12:56:30.983541 kernel: audit: type=1101 audit(1769518590.962:888): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:30.983618 kernel: audit: type=1103 audit(1769518590.965:889): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:30.983718 kernel: audit: type=1006 audit(1769518590.965:890): pid=5396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 27 12:56:30.990326 kubelet[2768]: E0127 12:56:30.990250 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:56:30.992072 kernel: audit: type=1300 audit(1769518590.965:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa4aee3f0 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:30.965000 audit[5396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa4aee3f0 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:30.965000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:31.009526 kernel: audit: type=1327 audit(1769518590.965:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:31.011217 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 27 12:56:31.014000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.017000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.043605 kernel: audit: type=1105 audit(1769518591.014:891): pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.043735 kernel: audit: type=1103 audit(1769518591.017:892): pid=5400 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.132543 sshd[5400]: Connection closed by 10.0.0.1 port 38760 Jan 27 12:56:31.132887 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:31.133000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.139525 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:38760.service: Deactivated successfully. Jan 27 12:56:31.143216 systemd[1]: session-26.scope: Deactivated successfully. Jan 27 12:56:31.146100 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit. Jan 27 12:56:31.148587 systemd-logind[1575]: Removed session 26. Jan 27 12:56:31.133000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.164837 kernel: audit: type=1106 audit(1769518591.133:893): pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.164964 kernel: audit: type=1104 audit(1769518591.133:894): pid=5396 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:31.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.130:22-10.0.0.1:38760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:31.989280 kubelet[2768]: E0127 12:56:31.989143 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:56:31.989280 kubelet[2768]: E0127 12:56:31.989248 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:56:32.988508 kubelet[2768]: E0127 12:56:32.988277 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:56:36.151066 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:58386.service - OpenSSH per-connection server daemon (10.0.0.1:58386). Jan 27 12:56:36.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.130:22-10.0.0.1:58386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:36.158968 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:36.159028 kernel: audit: type=1130 audit(1769518596.149:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.130:22-10.0.0.1:58386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:36.221000 audit[5415]: USER_ACCT pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.224107 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 58386 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:36.226327 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:36.223000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.233990 systemd-logind[1575]: New session 27 of user core. 
Jan 27 12:56:36.244727 kernel: audit: type=1101 audit(1769518596.221:897): pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.244809 kernel: audit: type=1103 audit(1769518596.223:898): pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.244840 kernel: audit: type=1006 audit(1769518596.223:899): pid=5415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 27 12:56:36.223000 audit[5415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38a15770 a2=3 a3=0 items=0 ppid=1 pid=5415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:36.259964 kernel: audit: type=1300 audit(1769518596.223:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38a15770 a2=3 a3=0 items=0 ppid=1 pid=5415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:36.260025 kernel: audit: type=1327 audit(1769518596.223:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:36.223000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:36.261008 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 27 12:56:36.269000 audit[5415]: USER_START pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.282982 kernel: audit: type=1105 audit(1769518596.269:900): pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.272000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.299937 kernel: audit: type=1103 audit(1769518596.272:901): pid=5419 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.413561 sshd[5419]: Connection closed by 10.0.0.1 port 58386 Jan 27 12:56:36.415494 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:36.416000 audit[5415]: USER_END pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.416000 audit[5415]: CRED_DISP pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.453192 kernel: audit: type=1106 audit(1769518596.416:902): pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.453301 kernel: audit: type=1104 audit(1769518596.416:903): pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.460875 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:58386.service: Deactivated successfully. Jan 27 12:56:36.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.130:22-10.0.0.1:58386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:36.466755 systemd[1]: session-27.scope: Deactivated successfully. Jan 27 12:56:36.470669 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit. Jan 27 12:56:36.477563 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:58390.service - OpenSSH per-connection server daemon (10.0.0.1:58390). 
Jan 27 12:56:36.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.130:22-10.0.0.1:58390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:36.480806 systemd-logind[1575]: Removed session 27. Jan 27 12:56:36.571000 audit[5432]: USER_ACCT pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.575287 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 58390 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:36.575000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.575000 audit[5432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff71ed8ee0 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:36.575000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:36.578025 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:36.584983 systemd-logind[1575]: New session 28 of user core. Jan 27 12:56:36.593452 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 27 12:56:36.597000 audit[5432]: USER_START pid=5432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.602000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.921723 sshd[5436]: Connection closed by 10.0.0.1 port 58390 Jan 27 12:56:36.922685 sshd-session[5432]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:36.927000 audit[5432]: USER_END pid=5432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.927000 audit[5432]: CRED_DISP pid=5432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:36.934842 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:58390.service: Deactivated successfully. Jan 27 12:56:36.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.130:22-10.0.0.1:58390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 27 12:56:36.940510 systemd[1]: session-28.scope: Deactivated successfully. Jan 27 12:56:36.945035 systemd-logind[1575]: Session 28 logged out. Waiting for processes to exit. Jan 27 12:56:36.948385 systemd[1]: Started sshd@27-10.0.0.130:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392). Jan 27 12:56:36.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.130:22-10.0.0.1:58392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:36.950980 systemd-logind[1575]: Removed session 28. Jan 27 12:56:36.997415 kubelet[2768]: E0127 12:56:36.996960 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:56:37.050000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.052719 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:37.053000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.053000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff000e76f0 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:37.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:37.056860 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:37.064605 systemd-logind[1575]: New session 29 of user core. Jan 27 12:56:37.071156 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 27 12:56:37.072000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.074000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.761485 sshd[5451]: Connection closed by 10.0.0.1 port 58392 Jan 27 12:56:37.762732 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:37.769000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.769000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.769000 audit[5465]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:37.769000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc245700f0 a2=0 a3=7ffc245700dc items=0 ppid=2925 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:37.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:37.774000 audit[5465]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:37.774000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc245700f0 a2=0 a3=0 items=0 ppid=2925 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:37.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:37.783043 systemd[1]: sshd@27-10.0.0.130:22-10.0.0.1:58392.service: Deactivated successfully. Jan 27 12:56:37.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.130:22-10.0.0.1:58392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:37.785790 systemd[1]: session-29.scope: Deactivated successfully. Jan 27 12:56:37.788493 systemd-logind[1575]: Session 29 logged out. Waiting for processes to exit. Jan 27 12:56:37.792199 systemd[1]: Started sshd@28-10.0.0.130:22-10.0.0.1:58402.service - OpenSSH per-connection server daemon (10.0.0.1:58402). 
Jan 27 12:56:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.130:22-10.0.0.1:58402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:37.795521 systemd-logind[1575]: Removed session 29. Jan 27 12:56:37.890000 audit[5470]: USER_ACCT pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.894000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.896579 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 58402 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:37.895000 audit[5470]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc53a28360 a2=3 a3=0 items=0 ppid=1 pid=5470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:37.895000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:37.901079 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:37.917145 systemd-logind[1575]: New session 30 of user core. Jan 27 12:56:37.920195 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 27 12:56:37.927000 audit[5470]: USER_START pid=5470 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:37.931000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.210235 sshd[5475]: Connection closed by 10.0.0.1 port 58402 Jan 27 12:56:38.212246 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:38.213000 audit[5470]: USER_END pid=5470 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.214000 audit[5470]: CRED_DISP pid=5470 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.226850 systemd[1]: sshd@28-10.0.0.130:22-10.0.0.1:58402.service: Deactivated successfully. Jan 27 12:56:38.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.130:22-10.0.0.1:58402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 27 12:56:38.232104 systemd[1]: session-30.scope: Deactivated successfully. Jan 27 12:56:38.233851 systemd-logind[1575]: Session 30 logged out. Waiting for processes to exit. Jan 27 12:56:38.244404 systemd[1]: Started sshd@29-10.0.0.130:22-10.0.0.1:58408.service - OpenSSH per-connection server daemon (10.0.0.1:58408). Jan 27 12:56:38.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.130:22-10.0.0.1:58408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:38.249331 systemd-logind[1575]: Removed session 30. Jan 27 12:56:38.324000 audit[5486]: USER_ACCT pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.327553 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 58408 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:38.327000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.327000 audit[5486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd97d5f0b0 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:38.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:38.330620 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:38.339171 systemd-logind[1575]: New session 31 of user core. Jan 27 12:56:38.354195 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 27 12:56:38.356000 audit[5486]: USER_START pid=5486 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.360000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.537492 sshd[5490]: Connection closed by 10.0.0.1 port 58408 Jan 27 12:56:38.538404 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:38.551000 audit[5486]: USER_END pid=5486 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.552000 audit[5486]: CRED_DISP pid=5486 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:38.558188 systemd[1]: sshd@29-10.0.0.130:22-10.0.0.1:58408.service: Deactivated successfully. Jan 27 12:56:38.558272 systemd-logind[1575]: Session 31 logged out. Waiting for processes to exit. Jan 27 12:56:38.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.130:22-10.0.0.1:58408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:38.562153 systemd[1]: session-31.scope: Deactivated successfully. Jan 27 12:56:38.567480 systemd-logind[1575]: Removed session 31. 
Jan 27 12:56:38.802000 audit[5505]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:38.802000 audit[5505]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffce03d2a00 a2=0 a3=7ffce03d29ec items=0 ppid=2925 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:38.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:38.809000 audit[5505]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:38.809000 audit[5505]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce03d2a00 a2=0 a3=0 items=0 ppid=2925 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:38.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:42.995009 kubelet[2768]: E0127 12:56:42.993622 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed" Jan 27 12:56:43.555157 systemd[1]: Started sshd@30-10.0.0.130:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). Jan 27 12:56:43.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.130:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:43.557962 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 27 12:56:43.558047 kernel: audit: type=1130 audit(1769518603.553:945): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.130:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:43.680000 audit[5538]: USER_ACCT pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.696031 kernel: audit: type=1101 audit(1769518603.680:946): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.696108 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:43.700759 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:43.711948 kernel: audit: type=1103 audit(1769518603.696:947): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.696000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.732022 kernel: audit: type=1006 audit(1769518603.696:948): pid=5538 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 27 12:56:43.732123 kernel: audit: type=1300 audit(1769518603.696:948): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2bee5cd0 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:43.696000 audit[5538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2bee5cd0 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:43.736433 kernel: audit: type=1327 audit(1769518603.696:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:43.696000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:43.745190 systemd-logind[1575]: New session 32 of user core. Jan 27 12:56:43.748116 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 27 12:56:43.753000 audit[5538]: USER_START pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.772988 kernel: audit: type=1105 audit(1769518603.753:949): pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.772000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.785069 kernel: audit: type=1103 audit(1769518603.772:950): pid=5543 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.899753 sshd[5543]: Connection closed by 10.0.0.1 port 55250 Jan 27 12:56:43.900160 sshd-session[5538]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:43.900000 audit[5538]: USER_END pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.907250 systemd[1]: sshd@30-10.0.0.130:22-10.0.0.1:55250.service: Deactivated successfully. Jan 27 12:56:43.910523 systemd[1]: session-32.scope: Deactivated successfully. Jan 27 12:56:43.913360 systemd-logind[1575]: Session 32 logged out. Waiting for processes to exit. Jan 27 12:56:43.915890 systemd-logind[1575]: Removed session 32. Jan 27 12:56:43.901000 audit[5538]: CRED_DISP pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.926272 kernel: audit: type=1106 audit(1769518603.900:951): pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.926339 kernel: audit: type=1104 audit(1769518603.901:952): pid=5538 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:43.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.130:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:43.987722 kubelet[2768]: E0127 12:56:43.987597 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8" Jan 27 12:56:43.988223 kubelet[2768]: E0127 12:56:43.988170 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc" Jan 27 12:56:44.988322 kubelet[2768]: E0127 12:56:44.988228 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d" Jan 27 12:56:45.992883 kubelet[2768]: E0127 12:56:45.992825 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f" Jan 27 12:56:48.564000 audit[5556]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:48.568693 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 12:56:48.568810 kernel: audit: type=1325 audit(1769518608.564:954): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:48.564000 audit[5556]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd81a32700 a2=0 a3=7ffd81a326ec items=0 ppid=2925 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:48.591011 kernel: audit: 
type=1300 audit(1769518608.564:954): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd81a32700 a2=0 a3=7ffd81a326ec items=0 ppid=2925 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:48.591096 kernel: audit: type=1327 audit(1769518608.564:954): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:48.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:48.582000 audit[5556]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:48.607815 kernel: audit: type=1325 audit(1769518608.582:955): table=nat:145 family=2 entries=104 op=nft_register_chain pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 12:56:48.607960 kernel: audit: type=1300 audit(1769518608.582:955): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd81a32700 a2=0 a3=7ffd81a326ec items=0 ppid=2925 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:48.582000 audit[5556]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd81a32700 a2=0 a3=7ffd81a326ec items=0 ppid=2925 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:48.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:48.630230 kernel: audit: type=1327 audit(1769518608.582:955): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 12:56:48.915157 systemd[1]: Started sshd@31-10.0.0.130:22-10.0.0.1:55252.service - OpenSSH per-connection server daemon (10.0.0.1:55252). Jan 27 12:56:48.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.130:22-10.0.0.1:55252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:48.923999 kernel: audit: type=1130 audit(1769518608.914:956): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.130:22-10.0.0.1:55252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:48.988711 kubelet[2768]: E0127 12:56:48.988560 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc77d7c4-lxzpr" podUID="7b273d03-9e4a-4d5f-a56c-d5eb5ded9cac" Jan 27 12:56:48.997000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.000201 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 55252 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:49.003488 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:49.000000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.011004 systemd-logind[1575]: New session 33 of user core. Jan 27 12:56:49.023142 kernel: audit: type=1101 audit(1769518608.997:957): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.023204 kernel: audit: type=1103 audit(1769518609.000:958): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.031314 kernel: audit: type=1006 audit(1769518609.000:959): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 27 12:56:49.000000 audit[5558]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd494b6710 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:49.000000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:49.034221 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 27 12:56:49.038000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.041000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.182265 sshd[5562]: Connection closed by 10.0.0.1 port 55252 Jan 27 12:56:49.183176 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Jan 27 12:56:49.183000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.183000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:49.189536 systemd-logind[1575]: Session 33 logged out. Waiting for processes to exit. Jan 27 12:56:49.190145 systemd[1]: sshd@31-10.0.0.130:22-10.0.0.1:55252.service: Deactivated successfully. Jan 27 12:56:49.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.130:22-10.0.0.1:55252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:49.195719 systemd[1]: session-33.scope: Deactivated successfully. Jan 27 12:56:49.199864 systemd-logind[1575]: Removed session 33. Jan 27 12:56:52.988205 kubelet[2768]: E0127 12:56:52.988097 2768 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 27 12:56:54.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.130:22-10.0.0.1:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 12:56:54.207247 systemd[1]: Started sshd@32-10.0.0.130:22-10.0.0.1:45932.service - OpenSSH per-connection server daemon (10.0.0.1:45932). Jan 27 12:56:54.209603 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 27 12:56:54.209802 kernel: audit: type=1130 audit(1769518614.205:965): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.130:22-10.0.0.1:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 12:56:54.276000 audit[5581]: USER_ACCT pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:54.279159 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 45932 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY Jan 27 12:56:54.281525 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 12:56:54.293967 kernel: audit: type=1101 audit(1769518614.276:966): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:54.294028 kernel: audit: type=1103 audit(1769518614.278:967): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:54.278000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 27 12:56:54.292423 systemd-logind[1575]: New session 34 of user core. Jan 27 12:56:54.307231 kernel: audit: type=1006 audit(1769518614.278:968): pid=5581 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 27 12:56:54.307330 kernel: audit: type=1300 audit(1769518614.278:968): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3e1a9500 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:54.278000 audit[5581]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3e1a9500 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 12:56:54.278000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:54.322529 kernel: audit: type=1327 audit(1769518614.278:968): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 12:56:54.328166 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 27 12:56:54.331000 audit[5581]: USER_START pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.335000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.359387 kernel: audit: type=1105 audit(1769518614.331:969): pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.359468 kernel: audit: type=1103 audit(1769518614.335:970): pid=5585 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.468990 sshd[5585]: Connection closed by 10.0.0.1 port 45932
Jan 27 12:56:54.472215 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Jan 27 12:56:54.472000 audit[5581]: USER_END pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.478614 systemd-logind[1575]: Session 34 logged out. Waiting for processes to exit.
Jan 27 12:56:54.479267 systemd[1]: sshd@32-10.0.0.130:22-10.0.0.1:45932.service: Deactivated successfully.
Jan 27 12:56:54.482778 systemd[1]: session-34.scope: Deactivated successfully.
Jan 27 12:56:54.485745 systemd-logind[1575]: Removed session 34.
Jan 27 12:56:54.472000 audit[5581]: CRED_DISP pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.495261 kernel: audit: type=1106 audit(1769518614.472:971): pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.495335 kernel: audit: type=1104 audit(1769518614.472:972): pid=5581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:54.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.130:22-10.0.0.1:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 12:56:54.988172 kubelet[2768]: E0127 12:56:54.988075 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d95ff6778-flxqp" podUID="518046d9-b7bc-493b-96b2-44b9979317ed"
Jan 27 12:56:57.989474 kubelet[2768]: E0127 12:56:57.989344 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-8w89r" podUID="32d8681f-2b1f-4fad-bc6d-7656e61dae7d"
Jan 27 12:56:57.990846 kubelet[2768]: E0127 12:56:57.990286 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df48b7979-cgdx9" podUID="e6d3c258-6f1e-4868-8f36-862014b4b2fc"
Jan 27 12:56:57.990846 kubelet[2768]: E0127 12:56:57.990639 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5vwvj" podUID="6af69036-827e-49bb-8e7c-3940b856830f"
Jan 27 12:56:58.989542 kubelet[2768]: E0127 12:56:58.988964 2768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtm9p" podUID="13a845a0-aaa5-4e80-8a2f-691163970ae8"
Jan 27 12:56:59.487815 systemd[1]: Started sshd@33-10.0.0.130:22-10.0.0.1:45946.service - OpenSSH per-connection server daemon (10.0.0.1:45946).
Jan 27 12:56:59.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.130:22-10.0.0.1:45946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 12:56:59.501013 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 27 12:56:59.501121 kernel: audit: type=1130 audit(1769518619.487:974): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.130:22-10.0.0.1:45946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 12:56:59.575000 audit[5601]: USER_ACCT pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.576632 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 45946 ssh2: RSA SHA256:CAATSPlsgm9CsI670Ly+kU72ggOX04U69roac9intlY
Jan 27 12:56:59.579999 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 27 12:56:59.577000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.593607 systemd-logind[1575]: New session 35 of user core.
Jan 27 12:56:59.600555 kernel: audit: type=1101 audit(1769518619.575:975): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.600619 kernel: audit: type=1103 audit(1769518619.577:976): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.608125 kernel: audit: type=1006 audit(1769518619.577:977): pid=5601 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1
Jan 27 12:56:59.577000 audit[5601]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd54048fd0 a2=3 a3=0 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 27 12:56:59.621081 kernel: audit: type=1300 audit(1769518619.577:977): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd54048fd0 a2=3 a3=0 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 27 12:56:59.621184 kernel: audit: type=1327 audit(1769518619.577:977): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 27 12:56:59.577000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 27 12:56:59.626362 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 27 12:56:59.630000 audit[5601]: USER_START pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.630000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.660430 kernel: audit: type=1105 audit(1769518619.630:978): pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.660512 kernel: audit: type=1103 audit(1769518619.630:979): pid=5605 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.738885 sshd[5605]: Connection closed by 10.0.0.1 port 45946
Jan 27 12:56:59.747543 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Jan 27 12:56:59.749000 audit[5601]: USER_END pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.755279 systemd[1]: sshd@33-10.0.0.130:22-10.0.0.1:45946.service: Deactivated successfully.
Jan 27 12:56:59.758088 systemd[1]: session-35.scope: Deactivated successfully.
Jan 27 12:56:59.759965 systemd-logind[1575]: Session 35 logged out. Waiting for processes to exit.
Jan 27 12:56:59.761715 systemd-logind[1575]: Removed session 35.
Jan 27 12:56:59.749000 audit[5601]: CRED_DISP pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.777218 kernel: audit: type=1106 audit(1769518619.749:980): pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.777305 kernel: audit: type=1104 audit(1769518619.749:981): pid=5601 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 27 12:56:59.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.130:22-10.0.0.1:45946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'