Jan 26 18:26:39.817905 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 26 15:51:16 -00 2026
Jan 26 18:26:39.817934 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c
Jan 26 18:26:39.817946 kernel: BIOS-provided physical RAM map:
Jan 26 18:26:39.817961 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 26 18:26:39.817971 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 26 18:26:39.817981 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 26 18:26:39.817991 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 26 18:26:39.818000 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 26 18:26:39.818008 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 26 18:26:39.818017 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 26 18:26:39.818026 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 26 18:26:39.818041 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 26 18:26:39.818051 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 26 18:26:39.818062 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 26 18:26:39.818073 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 26 18:26:39.818082 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 26 18:26:39.818094 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 26 18:26:39.818104 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 26 18:26:39.818116 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 26 18:26:39.818126 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 26 18:26:39.818138 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 26 18:26:39.818147 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 26 18:26:39.818156 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 26 18:26:39.818165 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 18:26:39.818174 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 26 18:26:39.818185 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 26 18:26:39.818200 kernel: NX (Execute Disable) protection: active
Jan 26 18:26:39.818212 kernel: APIC: Static calls initialized
Jan 26 18:26:39.818221 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 26 18:26:39.818230 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 26 18:26:39.818239 kernel: extended physical RAM map:
Jan 26 18:26:39.818248 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 26 18:26:39.818260 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 26 18:26:39.818271 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 26 18:26:39.818281 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 26 18:26:39.818292 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 26 18:26:39.818302 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 26 18:26:39.818314 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 26 18:26:39.818323 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 26 18:26:39.818335 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 26 18:26:39.818351 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 26 18:26:39.818367 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 26 18:26:39.818377 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 26 18:26:39.818386 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 26 18:26:39.818396 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 26 18:26:39.818871 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 26 18:26:39.819081 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 26 18:26:39.819094 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 26 18:26:39.819104 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 26 18:26:39.819117 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 26 18:26:39.819133 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 26 18:26:39.819146 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 26 18:26:39.819156 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 26 18:26:39.819165 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 26 18:26:39.819175 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 26 18:26:39.819184 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 18:26:39.819196 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 26 18:26:39.819208 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 26 18:26:39.819219 kernel: efi: EFI v2.7 by EDK II
Jan 26 18:26:39.819231 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 26 18:26:39.819240 kernel: random: crng init done
Jan 26 18:26:39.819254 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 26 18:26:39.819264 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 26 18:26:39.819276 kernel: secureboot: Secure boot disabled
Jan 26 18:26:39.819288 kernel: SMBIOS 2.8 present.
Jan 26 18:26:39.819300 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 26 18:26:39.819310 kernel: DMI: Memory slots populated: 1/1
Jan 26 18:26:39.819320 kernel: Hypervisor detected: KVM
Jan 26 18:26:39.819329 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 26 18:26:39.819338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 26 18:26:39.819350 kernel: kvm-clock: using sched offset of 10863797678 cycles
Jan 26 18:26:39.819363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 26 18:26:39.819379 kernel: tsc: Detected 2445.426 MHz processor
Jan 26 18:26:39.819390 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 26 18:26:39.819399 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 26 18:26:39.819557 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 26 18:26:39.819569 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 26 18:26:39.819579 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 26 18:26:39.819591 kernel: Using GB pages for direct mapping
Jan 26 18:26:39.819608 kernel: ACPI: Early table checksum verification disabled
Jan 26 18:26:39.819620 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 26 18:26:39.819716 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 26 18:26:39.819728 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819741 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819751 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 26 18:26:39.819761 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819775 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819785 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819798 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 26 18:26:39.819810 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 26 18:26:39.819823 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 26 18:26:39.819833 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 26 18:26:39.819843 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 26 18:26:39.819856 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 26 18:26:39.819867 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 26 18:26:39.819879 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 26 18:26:39.819891 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 26 18:26:39.819902 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 26 18:26:39.819912 kernel: No NUMA configuration found
Jan 26 18:26:39.819922 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 26 18:26:39.819932 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 26 18:26:39.819949 kernel: Zone ranges:
Jan 26 18:26:39.819961 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 26 18:26:39.819974 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 26 18:26:39.819984 kernel: Normal empty
Jan 26 18:26:39.819993 kernel: Device empty
Jan 26 18:26:39.820004 kernel: Movable zone start for each node
Jan 26 18:26:39.820013 kernel: Early memory node ranges
Jan 26 18:26:39.820102 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 26 18:26:39.820115 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 26 18:26:39.820128 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 26 18:26:39.820139 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 26 18:26:39.820149 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 26 18:26:39.820159 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 26 18:26:39.820169 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 26 18:26:39.820179 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 26 18:26:39.820196 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 26 18:26:39.820208 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 18:26:39.820229 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 26 18:26:39.820242 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 26 18:26:39.820253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 18:26:39.820266 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 26 18:26:39.820278 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 26 18:26:39.820291 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 26 18:26:39.820302 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 26 18:26:39.820316 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 26 18:26:39.820326 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 26 18:26:39.820338 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 26 18:26:39.820351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 26 18:26:39.820365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 26 18:26:39.820375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 26 18:26:39.820386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 26 18:26:39.820398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 26 18:26:39.820553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 26 18:26:39.820568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 26 18:26:39.820581 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 26 18:26:39.820593 kernel: TSC deadline timer available
Jan 26 18:26:39.820608 kernel: CPU topo: Max. logical packages: 1
Jan 26 18:26:39.820619 kernel: CPU topo: Max. logical dies: 1
Jan 26 18:26:39.820716 kernel: CPU topo: Max. dies per package: 1
Jan 26 18:26:39.820728 kernel: CPU topo: Max. threads per core: 1
Jan 26 18:26:39.820742 kernel: CPU topo: Num. cores per package: 4
Jan 26 18:26:39.820752 kernel: CPU topo: Num. threads per package: 4
Jan 26 18:26:39.820762 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 26 18:26:39.820772 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 26 18:26:39.820786 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 26 18:26:39.820799 kernel: kvm-guest: setup PV sched yield
Jan 26 18:26:39.820812 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 26 18:26:39.820825 kernel: Booting paravirtualized kernel on KVM
Jan 26 18:26:39.820836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 26 18:26:39.820846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 26 18:26:39.820857 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 26 18:26:39.820872 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 26 18:26:39.820885 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 26 18:26:39.820899 kernel: kvm-guest: PV spinlocks enabled
Jan 26 18:26:39.820909 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 26 18:26:39.820921 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c
Jan 26 18:26:39.820932 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 26 18:26:39.820948 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 26 18:26:39.820960 kernel: Fallback order for Node 0: 0
Jan 26 18:26:39.820973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 26 18:26:39.820984 kernel: Policy zone: DMA32
Jan 26 18:26:39.820995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 26 18:26:39.821005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 26 18:26:39.821015 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 26 18:26:39.821032 kernel: ftrace: allocated 157 pages with 5 groups
Jan 26 18:26:39.821045 kernel: Dynamic Preempt: voluntary
Jan 26 18:26:39.821057 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 26 18:26:39.821068 kernel: rcu: RCU event tracing is enabled.
Jan 26 18:26:39.821079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 26 18:26:39.821089 kernel: Trampoline variant of Tasks RCU enabled.
Jan 26 18:26:39.821103 kernel: Rude variant of Tasks RCU enabled.
Jan 26 18:26:39.821115 kernel: Tracing variant of Tasks RCU enabled.
Jan 26 18:26:39.821131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 26 18:26:39.821141 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 26 18:26:39.821152 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 26 18:26:39.821162 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 26 18:26:39.821176 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 26 18:26:39.821188 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 26 18:26:39.821202 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 26 18:26:39.821216 kernel: Console: colour dummy device 80x25
Jan 26 18:26:39.821227 kernel: printk: legacy console [ttyS0] enabled
Jan 26 18:26:39.821237 kernel: ACPI: Core revision 20240827
Jan 26 18:26:39.821250 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 26 18:26:39.821263 kernel: APIC: Switch to symmetric I/O mode setup
Jan 26 18:26:39.821276 kernel: x2apic enabled
Jan 26 18:26:39.821287 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 26 18:26:39.821297 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 26 18:26:39.821311 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 26 18:26:39.821325 kernel: kvm-guest: setup PV IPIs
Jan 26 18:26:39.821337 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 26 18:26:39.821351 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 26 18:26:39.821361 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 26 18:26:39.821372 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 26 18:26:39.821382 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 26 18:26:39.821399 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 26 18:26:39.821553 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 26 18:26:39.821568 kernel: Spectre V2 : Mitigation: Retpolines
Jan 26 18:26:39.821581 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 26 18:26:39.821592 kernel: Speculative Store Bypass: Vulnerable
Jan 26 18:26:39.821603 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 26 18:26:39.821619 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 26 18:26:39.821712 kernel: active return thunk: srso_alias_return_thunk
Jan 26 18:26:39.821725 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 26 18:26:39.821738 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 26 18:26:39.821749 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 26 18:26:39.821760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 26 18:26:39.821770 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 26 18:26:39.821785 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 26 18:26:39.821799 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 26 18:26:39.821810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 26 18:26:39.821821 kernel: Freeing SMP alternatives memory: 32K
Jan 26 18:26:39.821831 kernel: pid_max: default: 32768 minimum: 301
Jan 26 18:26:39.821842 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 26 18:26:39.821855 kernel: landlock: Up and running.
Jan 26 18:26:39.821871 kernel: SELinux: Initializing.
Jan 26 18:26:39.821884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 26 18:26:39.821894 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 26 18:26:39.821905 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 26 18:26:39.821915 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 26 18:26:39.821928 kernel: signal: max sigframe size: 1776
Jan 26 18:26:39.821941 kernel: rcu: Hierarchical SRCU implementation.
Jan 26 18:26:39.821958 kernel: rcu: Max phase no-delay instances is 400.
Jan 26 18:26:39.821969 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 26 18:26:39.821979 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 26 18:26:39.821989 kernel: smp: Bringing up secondary CPUs ...
Jan 26 18:26:39.822001 kernel: smpboot: x86: Booting SMP configuration:
Jan 26 18:26:39.822014 kernel: .... node #0, CPUs: #1 #2 #3
Jan 26 18:26:39.822027 kernel: smp: Brought up 1 node, 4 CPUs
Jan 26 18:26:39.822038 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 26 18:26:39.822053 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120812K reserved, 0K cma-reserved)
Jan 26 18:26:39.822064 kernel: devtmpfs: initialized
Jan 26 18:26:39.822076 kernel: x86/mm: Memory block size: 128MB
Jan 26 18:26:39.822089 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 26 18:26:39.822102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 26 18:26:39.822113 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 26 18:26:39.822127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 26 18:26:39.822142 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 26 18:26:39.822156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 26 18:26:39.822169 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 26 18:26:39.822183 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 26 18:26:39.822194 kernel: pinctrl core: initialized pinctrl subsystem
Jan 26 18:26:39.822204 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 26 18:26:39.822218 kernel: audit: initializing netlink subsys (disabled)
Jan 26 18:26:39.822230 kernel: audit: type=2000 audit(1769451989.139:1): state=initialized audit_enabled=0 res=1
Jan 26 18:26:39.822242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 26 18:26:39.822255 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 26 18:26:39.822267 kernel: cpuidle: using governor menu
Jan 26 18:26:39.822277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 26 18:26:39.822288 kernel: dca service started, version 1.12.1
Jan 26 18:26:39.822298 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 26 18:26:39.822316 kernel: PCI: Using configuration type 1 for base access
Jan 26 18:26:39.822328 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 26 18:26:39.822341 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 26 18:26:39.822352 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 26 18:26:39.822362 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 26 18:26:39.822373 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 26 18:26:39.822385 kernel: ACPI: Added _OSI(Module Device)
Jan 26 18:26:39.822401 kernel: ACPI: Added _OSI(Processor Device)
Jan 26 18:26:39.822553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 26 18:26:39.822562 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 26 18:26:39.822575 kernel: ACPI: Interpreter enabled
Jan 26 18:26:39.822585 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 26 18:26:39.822596 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 26 18:26:39.822606 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 26 18:26:39.822701 kernel: PCI: Using E820 reservations for host bridge windows
Jan 26 18:26:39.822715 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 26 18:26:39.822728 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 26 18:26:39.823061 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 26 18:26:39.823318 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 26 18:26:39.823793 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 26 18:26:39.823819 kernel: PCI host bridge to bus 0000:00
Jan 26 18:26:39.824060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 26 18:26:39.824285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 26 18:26:39.824794 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 26 18:26:39.825016 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 26 18:26:39.825301 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 26 18:26:39.826019 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 26 18:26:39.827028 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 26 18:26:39.827349 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 26 18:26:39.827849 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 26 18:26:39.828036 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 26 18:26:39.828210 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 26 18:26:39.828376 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 26 18:26:39.828804 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 26 18:26:39.828977 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 20507 usecs
Jan 26 18:26:39.829155 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 18:26:39.829331 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 26 18:26:39.829760 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 26 18:26:39.829939 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 26 18:26:39.830118 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 26 18:26:39.830303 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 26 18:26:39.830723 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 26 18:26:39.830907 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 26 18:26:39.831082 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 18:26:39.831249 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 26 18:26:39.831539 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 26 18:26:39.831808 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 26 18:26:39.831978 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 26 18:26:39.832164 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 26 18:26:39.832333 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 26 18:26:39.832711 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 14648 usecs
Jan 26 18:26:39.832904 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 26 18:26:39.833072 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 26 18:26:39.833245 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 26 18:26:39.833548 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 26 18:26:39.833881 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 26 18:26:39.833901 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 26 18:26:39.833914 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 26 18:26:39.833928 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 26 18:26:39.833949 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 26 18:26:39.833963 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 26 18:26:39.833977 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 26 18:26:39.833991 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 26 18:26:39.834003 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 26 18:26:39.834011 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 26 18:26:39.834018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 26 18:26:39.834026 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 26 18:26:39.834036 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 26 18:26:39.834044 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 26 18:26:39.834051 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 26 18:26:39.834059 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 26 18:26:39.834066 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 26 18:26:39.834074 kernel: iommu: Default domain type: Translated
Jan 26 18:26:39.834081 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 26 18:26:39.834091 kernel: efivars: Registered efivars operations
Jan 26 18:26:39.834098 kernel: PCI: Using ACPI for IRQ routing
Jan 26 18:26:39.834106 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 26 18:26:39.834114 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 26 18:26:39.834121 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 26 18:26:39.834128 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 26 18:26:39.834135 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 26 18:26:39.834145 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 26 18:26:39.834152 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 26 18:26:39.834160 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 26 18:26:39.834167 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 26 18:26:39.834349 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 26 18:26:39.834741 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 26 18:26:39.834988 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 26 18:26:39.834999 kernel: vgaarb: loaded
Jan 26 18:26:39.835007 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 26 18:26:39.835015 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 26 18:26:39.835023 kernel: clocksource: Switched to clocksource kvm-clock
Jan 26 18:26:39.835030 kernel: VFS: Disk quotas dquot_6.6.0
Jan 26 18:26:39.835038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 26 18:26:39.835045 kernel: pnp: PnP ACPI init
Jan 26 18:26:39.835351 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 26 18:26:39.835598 kernel: pnp: PnP ACPI: found 6 devices
Jan 26 18:26:39.835608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 26 18:26:39.835616 kernel: NET: Registered PF_INET protocol family
Jan 26 18:26:39.835749 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 26 18:26:39.835759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 26 18:26:39.835786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 26 18:26:39.835804 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 26 18:26:39.835818 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 26 18:26:39.835831 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 26 18:26:39.835839 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 26 18:26:39.835846 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 26 18:26:39.835854 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 26 18:26:39.835865 kernel: NET: Registered PF_XDP protocol family
Jan 26 18:26:39.836319 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 26 18:26:39.837233 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 26 18:26:39.837926 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 26 18:26:39.839181 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 26 18:26:39.839798 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 26 18:26:39.840371 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 26 18:26:39.840903 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 26 18:26:39.841266 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 26 18:26:39.841327 kernel: PCI: CLS 0 bytes, default 64
Jan 26 18:26:39.841384 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 26 18:26:39.841392 kernel: Initialise system trusted keyrings
Jan 26 18:26:39.841729 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 26 18:26:39.841754 kernel: Key type asymmetric registered
Jan 26 18:26:39.841767 kernel: Asymmetric key parser 'x509' registered
Jan 26 18:26:39.841781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 26 18:26:39.841799 kernel: io scheduler mq-deadline registered
Jan 26 18:26:39.841807 kernel: io scheduler kyber registered
Jan 26 18:26:39.841815 kernel: io scheduler bfq registered
Jan 26 18:26:39.841823 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 26 18:26:39.841834 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 26 18:26:39.841844 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 26 18:26:39.841852 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 26 18:26:39.841860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 26 18:26:39.841868 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 26 18:26:39.841878 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 26 18:26:39.841886 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 26 18:26:39.841958 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 26 18:26:39.842943 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 26 18:26:39.842959 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 26 18:26:39.843743 kernel: rtc_cmos 00:04: registered as rtc0
Jan 26 18:26:39.845321 kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T18:26:36 UTC (1769451996)
Jan 26 18:26:39.847071 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 26 18:26:39.847183 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 26 18:26:39.847239 kernel: efifb: probing for efifb
Jan 26 18:26:39.847294 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 26 18:26:39.847347 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 26 18:26:39.847399 kernel: efifb: scrolling: redraw
Jan 26 18:26:39.847724 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 26 18:26:39.847732 kernel: Console: switching to colour frame buffer device 160x50
Jan 26 18:26:39.847789 kernel: fb0: EFI VGA frame buffer device
Jan 26 18:26:39.847843 kernel: pstore: Using crash dump compression: deflate
Jan 26 18:26:39.847896 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 26 18:26:39.847905 kernel: NET: Registered PF_INET6 protocol family
Jan 26 18:26:39.847913 kernel: Segment Routing with IPv6
Jan 26 18:26:39.847924 kernel: In-situ OAM (IOAM) with IPv6
Jan 26 18:26:39.847933 kernel: NET: Registered PF_PACKET protocol family
Jan 26 18:26:39.847941 kernel: Key type dns_resolver
registered Jan 26 18:26:39.847949 kernel: IPI shorthand broadcast: enabled Jan 26 18:26:39.847958 kernel: sched_clock: Marking stable (4728055459, 3268697947)->(9106546570, -1109793164) Jan 26 18:26:39.847966 kernel: registered taskstats version 1 Jan 26 18:26:39.847974 kernel: Loading compiled-in X.509 certificates Jan 26 18:26:39.847983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3aafff36862946ad45897da10ba1e85c8fafc8e8' Jan 26 18:26:39.847993 kernel: Demotion targets for Node 0: null Jan 26 18:26:39.848001 kernel: Key type .fscrypt registered Jan 26 18:26:39.848010 kernel: Key type fscrypt-provisioning registered Jan 26 18:26:39.848018 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 26 18:26:39.848026 kernel: ima: Allocated hash algorithm: sha1 Jan 26 18:26:39.848034 kernel: ima: No architecture policies found Jan 26 18:26:39.848044 kernel: clk: Disabling unused clocks Jan 26 18:26:39.848052 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 26 18:26:39.848060 kernel: Write protecting the kernel read-only data: 47104k Jan 26 18:26:39.848069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 26 18:26:39.848077 kernel: Run /init as init process Jan 26 18:26:39.848085 kernel: with arguments: Jan 26 18:26:39.848093 kernel: /init Jan 26 18:26:39.848102 kernel: with environment: Jan 26 18:26:39.848111 kernel: HOME=/ Jan 26 18:26:39.848119 kernel: TERM=linux Jan 26 18:26:39.848127 kernel: SCSI subsystem initialized Jan 26 18:26:39.848136 kernel: libata version 3.00 loaded. 
Jan 26 18:26:39.848317 kernel: ahci 0000:00:1f.2: version 3.0
Jan 26 18:26:39.848329 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 26 18:26:39.848749 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 26 18:26:39.848933 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 26 18:26:39.849103 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 26 18:26:39.849294 kernel: scsi host0: ahci
Jan 26 18:26:39.849601 kernel: scsi host1: ahci
Jan 26 18:26:39.849860 kernel: scsi host2: ahci
Jan 26 18:26:39.850048 kernel: scsi host3: ahci
Jan 26 18:26:39.850227 kernel: scsi host4: ahci
Jan 26 18:26:39.850405 kernel: scsi host5: ahci
Jan 26 18:26:39.850532 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Jan 26 18:26:39.850545 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Jan 26 18:26:39.850553 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Jan 26 18:26:39.850563 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Jan 26 18:26:39.850571 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Jan 26 18:26:39.850579 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Jan 26 18:26:39.850587 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 26 18:26:39.850595 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 26 18:26:39.850602 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 26 18:26:39.850610 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 26 18:26:39.850620 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 26 18:26:39.850692 kernel: ata3.00: LPM support broken, forcing max_power
Jan 26 18:26:39.850700 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 26 18:26:39.850708 kernel: ata3.00: applying bridge limits
Jan 26 18:26:39.850716 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 26 18:26:39.850723 kernel: ata3.00: LPM support broken, forcing max_power
Jan 26 18:26:39.850731 kernel: ata3.00: configured for UDMA/100
Jan 26 18:26:39.850939 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 26 18:26:39.851129 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 26 18:26:39.851298 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 26 18:26:39.851740 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 26 18:26:39.851755 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 26 18:26:39.851763 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 26 18:26:39.851775 kernel: GPT:16515071 != 27000831
Jan 26 18:26:39.851783 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 26 18:26:39.851790 kernel: GPT:16515071 != 27000831
Jan 26 18:26:39.851798 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 26 18:26:39.851805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 26 18:26:39.851995 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 26 18:26:39.852006 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 26 18:26:39.852017 kernel: device-mapper: uevent: version 1.0.3
Jan 26 18:26:39.852025 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 26 18:26:39.852032 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 26 18:26:39.852040 kernel: raid6: avx2x4 gen() 32463 MB/s
Jan 26 18:26:39.852048 kernel: raid6: avx2x2 gen() 31618 MB/s
Jan 26 18:26:39.852055 kernel: raid6: avx2x1 gen() 24101 MB/s
Jan 26 18:26:39.852063 kernel: raid6: using algorithm avx2x4 gen() 32463 MB/s
Jan 26 18:26:39.852073 kernel: raid6: .... xor() 4068 MB/s, rmw enabled
Jan 26 18:26:39.852081 kernel: raid6: using avx2x2 recovery algorithm
Jan 26 18:26:39.852088 kernel: xor: automatically using best checksumming function avx
Jan 26 18:26:39.852096 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 26 18:26:39.852104 kernel: BTRFS: device fsid c78f7707-5c76-44ff-97b4-b1f791a94b1d devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 26 18:26:39.852112 kernel: BTRFS info (device dm-0): first mount of filesystem c78f7707-5c76-44ff-97b4-b1f791a94b1d
Jan 26 18:26:39.852120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 26 18:26:39.852131 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 26 18:26:39.852138 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 26 18:26:39.852150 kernel: loop: module loaded
Jan 26 18:26:39.852158 kernel: loop0: detected capacity change from 0 to 100552
Jan 26 18:26:39.852166 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 26 18:26:39.852175 systemd[1]: Successfully made /usr/ read-only.
Jan 26 18:26:39.852185 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 26 18:26:39.852196 systemd[1]: Detected virtualization kvm.
Jan 26 18:26:39.852204 systemd[1]: Detected architecture x86-64.
Jan 26 18:26:39.852211 systemd[1]: Running in initrd.
Jan 26 18:26:39.852219 systemd[1]: No hostname configured, using default hostname.
Jan 26 18:26:39.852227 systemd[1]: Hostname set to .
Jan 26 18:26:39.852235 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 26 18:26:39.852245 systemd[1]: Queued start job for default target initrd.target.
Jan 26 18:26:39.852253 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 26 18:26:39.852261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 26 18:26:39.852272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 26 18:26:39.852280 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 26 18:26:39.852289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 26 18:26:39.852299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 26 18:26:39.852307 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 26 18:26:39.852315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 26 18:26:39.852324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 26 18:26:39.852332 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 26 18:26:39.852340 systemd[1]: Reached target paths.target - Path Units.
Jan 26 18:26:39.852350 systemd[1]: Reached target slices.target - Slice Units.
Jan 26 18:26:39.852358 systemd[1]: Reached target swap.target - Swaps.
Jan 26 18:26:39.852366 systemd[1]: Reached target timers.target - Timer Units.
Jan 26 18:26:39.852374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 26 18:26:39.852382 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 26 18:26:39.852390 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 26 18:26:39.852398 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 26 18:26:39.852531 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 26 18:26:39.852540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 26 18:26:39.852548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 26 18:26:39.852556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 26 18:26:39.852564 systemd[1]: Reached target sockets.target - Socket Units.
Jan 26 18:26:39.852573 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 26 18:26:39.852584 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 26 18:26:39.852592 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 26 18:26:39.852600 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 26 18:26:39.852608 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 26 18:26:39.852616 systemd[1]: Starting systemd-fsck-usr.service...
Jan 26 18:26:39.852686 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 26 18:26:39.852695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 26 18:26:39.852706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 26 18:26:39.852715 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 26 18:26:39.852723 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 26 18:26:39.852731 systemd[1]: Finished systemd-fsck-usr.service.
Jan 26 18:26:39.852766 systemd-journald[320]: Collecting audit messages is enabled.
Jan 26 18:26:39.852788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 26 18:26:39.852796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 26 18:26:39.852807 systemd-journald[320]: Journal started
Jan 26 18:26:39.852823 systemd-journald[320]: Runtime Journal (/run/log/journal/fa9db811465b4404a6a39b163932c8fb) is 6M, max 48M, 42M free.
Jan 26 18:26:39.889764 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 26 18:26:39.889795 kernel: Bridge firewalling registered
Jan 26 18:26:39.889808 kernel: audit: type=1130 audit(1769451999.867:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.893750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 26 18:26:39.904284 systemd-modules-load[321]: Inserted module 'br_netfilter'
Jan 26 18:26:39.912898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 26 18:26:39.949926 kernel: audit: type=1130 audit(1769451999.926:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.956342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 26 18:26:39.995975 kernel: audit: type=1130 audit(1769451999.965:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:39.972717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 26 18:26:39.974899 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 26 18:26:40.029328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 26 18:26:40.043766 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 26 18:26:40.146222 kernel: audit: type=1130 audit(1769452000.043:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.146267 kernel: audit: type=1130 audit(1769452000.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.146289 kernel: audit: type=1130 audit(1769452000.118:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.074959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 26 18:26:40.098142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 26 18:26:40.109262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 26 18:26:40.153765 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 26 18:26:40.185333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 26 18:26:40.228161 kernel: audit: type=1130 audit(1769452000.185:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.251303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 26 18:26:40.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.272248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 26 18:26:40.296392 kernel: audit: type=1130 audit(1769452000.267:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.296581 kernel: audit: type=1334 audit(1769452000.268:10): prog-id=6 op=LOAD
Jan 26 18:26:40.268000 audit: BPF prog-id=6 op=LOAD
Jan 26 18:26:40.296711 dracut-cmdline[352]: dracut-109
Jan 26 18:26:40.296711 dracut-cmdline[352]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c
Jan 26 18:26:40.448897 systemd-resolved[359]: Positive Trust Anchors:
Jan 26 18:26:40.448923 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 26 18:26:40.448929 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 26 18:26:40.448975 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 26 18:26:40.572891 systemd-resolved[359]: Defaulting to hostname 'linux'.
Jan 26 18:26:40.576028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 26 18:26:40.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.603305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 26 18:26:40.642253 kernel: audit: type=1130 audit(1769452000.602:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:40.835867 kernel: Loading iSCSI transport class v2.0-870.
Jan 26 18:26:40.866793 kernel: iscsi: registered transport (tcp)
Jan 26 18:26:40.901757 kernel: iscsi: registered transport (qla4xxx)
Jan 26 18:26:40.901806 kernel: QLogic iSCSI HBA Driver
Jan 26 18:26:40.961205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 26 18:26:41.015069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 26 18:26:41.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.018322 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 26 18:26:41.132998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 26 18:26:41.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.135336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 26 18:26:41.163018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 26 18:26:41.235912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 26 18:26:41.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.244000 audit: BPF prog-id=7 op=LOAD
Jan 26 18:26:41.245000 audit: BPF prog-id=8 op=LOAD
Jan 26 18:26:41.249093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 26 18:26:41.309870 systemd-udevd[586]: Using default interface naming scheme 'v257'.
Jan 26 18:26:41.338959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 26 18:26:41.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.359261 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 26 18:26:41.417620 dracut-pre-trigger[635]: rd.md=0: removing MD RAID activation
Jan 26 18:26:41.492186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 26 18:26:41.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.494000 audit: BPF prog-id=9 op=LOAD
Jan 26 18:26:41.496317 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 26 18:26:41.544613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 26 18:26:41.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.567948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 26 18:26:41.640991 systemd-networkd[721]: lo: Link UP
Jan 26 18:26:41.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.641064 systemd-networkd[721]: lo: Gained carrier
Jan 26 18:26:41.642582 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 26 18:26:41.646332 systemd[1]: Reached target network.target - Network.
Jan 26 18:26:41.732325 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 26 18:26:41.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:41.747272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 26 18:26:41.848176 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 26 18:26:41.890366 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 26 18:26:41.952995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 26 18:26:41.974178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 26 18:26:42.000178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 26 18:26:42.018966 kernel: cryptd: max_cpu_qlen set to 1000
Jan 26 18:26:42.000396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 26 18:26:42.000569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 26 18:26:42.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:42.037996 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 26 18:26:42.058230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 26 18:26:42.122810 kernel: AES CTR mode by8 optimization enabled
Jan 26 18:26:42.148885 disk-uuid[784]: Primary Header is updated.
Jan 26 18:26:42.148885 disk-uuid[784]: Secondary Entries is updated.
Jan 26 18:26:42.148885 disk-uuid[784]: Secondary Header is updated.
Jan 26 18:26:42.183748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 26 18:26:42.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:42.192917 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 26 18:26:42.192922 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 26 18:26:42.196562 systemd-networkd[721]: eth0: Link UP
Jan 26 18:26:42.196918 systemd-networkd[721]: eth0: Gained carrier
Jan 26 18:26:42.196929 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 26 18:26:42.288181 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 26 18:26:42.290959 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 26 18:26:42.384614 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 26 18:26:42.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:42.401205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 26 18:26:42.405385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 26 18:26:42.407601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 26 18:26:42.434339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 26 18:26:42.502081 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 26 18:26:42.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:43.259072 disk-uuid[813]: Warning: The kernel is still using the old partition table.
Jan 26 18:26:43.259072 disk-uuid[813]: The new table will be used at the next reboot or after you
Jan 26 18:26:43.259072 disk-uuid[813]: run partprobe(8) or kpartx(8)
Jan 26 18:26:43.259072 disk-uuid[813]: The operation has completed successfully.
Jan 26 18:26:43.295905 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 26 18:26:43.296168 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 26 18:26:43.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:43.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:43.317933 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 26 18:26:43.411832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Jan 26 18:26:43.430377 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:26:43.430594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:26:43.460382 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:26:43.460866 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:26:43.487867 kernel: BTRFS info (device vda6): last unmount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:26:43.494283 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 26 18:26:43.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:43.499780 systemd-networkd[721]: eth0: Gained IPv6LL Jan 26 18:26:43.504391 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 26 18:26:43.735800 ignition[884]: Ignition 2.24.0 Jan 26 18:26:43.735876 ignition[884]: Stage: fetch-offline Jan 26 18:26:43.735916 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:43.735928 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:43.736018 ignition[884]: parsed url from cmdline: "" Jan 26 18:26:43.736022 ignition[884]: no config URL provided Jan 26 18:26:43.736100 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 26 18:26:43.736111 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 26 18:26:43.736157 ignition[884]: op(1): [started] loading QEMU firmware config module Jan 26 18:26:43.736162 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 26 18:26:43.757371 ignition[884]: op(1): [finished] loading QEMU firmware config module Jan 26 18:26:44.457404 ignition[884]: parsing config with SHA512: 0570ee2d2fb46f311825ad2f88bb31ee44eb3aecee7451f139f137972091acf35235c3a7d67845b21fd463edefced1fc12061c7304f7efd6b4920fb245dd9d6b Jan 26 18:26:44.482335 unknown[884]: fetched base config from "system" Jan 26 18:26:44.482357 unknown[884]: fetched user config from "qemu" Jan 26 18:26:44.485574 ignition[884]: fetch-offline: fetch-offline passed Jan 26 18:26:44.485744 ignition[884]: Ignition finished successfully Jan 26 18:26:44.515987 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 26 18:26:44.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:44.528175 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 26 18:26:44.559800 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 26 18:26:44.647240 ignition[895]: Ignition 2.24.0 Jan 26 18:26:44.647315 ignition[895]: Stage: kargs Jan 26 18:26:44.661635 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:44.661752 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:44.680163 ignition[895]: kargs: kargs passed Jan 26 18:26:44.680292 ignition[895]: Ignition finished successfully Jan 26 18:26:44.694288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 26 18:26:44.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:44.712937 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 26 18:26:44.769026 ignition[902]: Ignition 2.24.0 Jan 26 18:26:44.769103 ignition[902]: Stage: disks Jan 26 18:26:44.769977 ignition[902]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:44.769993 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:44.815956 ignition[902]: disks: disks passed Jan 26 18:26:44.816371 ignition[902]: Ignition finished successfully Jan 26 18:26:44.830739 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 26 18:26:44.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:44.832070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 26 18:26:44.856009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 26 18:26:44.856188 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 26 18:26:44.874018 systemd[1]: Reached target sysinit.target - System Initialization. Jan 26 18:26:44.895186 systemd[1]: Reached target basic.target - Basic System. 
Jan 26 18:26:44.909984 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 26 18:26:45.004106 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 26 18:26:45.019220 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 26 18:26:45.064756 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 26 18:26:45.064783 kernel: audit: type=1130 audit(1769452005.023:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:45.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:45.027151 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 26 18:26:45.387854 kernel: EXT4-fs (vda9): mounted filesystem 348114a3-4c6d-4729-be31-f084b711617b r/w with ordered data mode. Quota mode: none. Jan 26 18:26:45.388861 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 26 18:26:45.396022 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 26 18:26:45.412638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 26 18:26:45.437938 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 26 18:26:45.451621 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 26 18:26:45.480895 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Jan 26 18:26:45.451775 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 26 18:26:45.451824 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 26 18:26:45.535169 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:26:45.535192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:26:45.535210 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:26:45.535221 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:26:45.474998 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 26 18:26:45.488566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 26 18:26:45.555187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 26 18:26:45.939314 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 26 18:26:45.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:45.955614 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 26 18:26:45.981560 kernel: audit: type=1130 audit(1769452005.952:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:45.970063 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 26 18:26:46.029987 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 26 18:26:46.044365 kernel: BTRFS info (device vda6): last unmount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:26:46.077849 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 26 18:26:46.105258 kernel: audit: type=1130 audit(1769452006.083:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:46.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:46.109726 ignition[1019]: INFO : Ignition 2.24.0 Jan 26 18:26:46.109726 ignition[1019]: INFO : Stage: mount Jan 26 18:26:46.122034 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:46.122034 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:46.122034 ignition[1019]: INFO : mount: mount passed Jan 26 18:26:46.122034 ignition[1019]: INFO : Ignition finished successfully Jan 26 18:26:46.158024 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 26 18:26:46.197101 kernel: audit: type=1130 audit(1769452006.164:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:46.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:46.167013 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 26 18:26:46.393894 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 26 18:26:46.460093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030) Jan 26 18:26:46.477795 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:26:46.477828 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:26:46.504147 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:26:46.504187 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:26:46.508036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 26 18:26:46.586184 ignition[1047]: INFO : Ignition 2.24.0 Jan 26 18:26:46.586184 ignition[1047]: INFO : Stage: files Jan 26 18:26:46.596953 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:46.596953 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:46.619959 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 26 18:26:46.628870 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 26 18:26:46.628870 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 26 18:26:46.657184 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 26 18:26:46.665786 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 26 18:26:46.665786 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 26 18:26:46.659383 unknown[1047]: wrote ssh authorized keys file for user: core Jan 26 18:26:46.688988 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 26 18:26:46.688988 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 26 18:26:46.751980 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 26 18:26:46.920832 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 26 18:26:46.920832 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 26 18:26:46.951296 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 26 18:26:47.303159 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 26 18:26:47.764640 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 26 18:26:47.764640 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 26 18:26:47.791898 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 26 18:26:47.811968 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 26 18:26:47.920575 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 26 18:26:47.944391 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 26 18:26:47.956369 ignition[1047]: INFO : files: files passed Jan 26 18:26:47.956369 ignition[1047]: INFO : Ignition finished successfully Jan 26 18:26:48.031351 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 26 18:26:48.042154 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 26 18:26:48.093077 kernel: audit: type=1130 audit(1769452008.039:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.049578 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 26 18:26:48.124277 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 26 18:26:48.124721 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 26 18:26:48.131935 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Jan 26 18:26:48.140087 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:26:48.140087 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:26:48.178606 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:26:48.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.181361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 26 18:26:48.264626 kernel: audit: type=1130 audit(1769452008.180:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.264762 kernel: audit: type=1131 audit(1769452008.180:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.264785 kernel: audit: type=1130 audit(1769452008.230:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:48.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.264824 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 26 18:26:48.285332 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 26 18:26:48.400350 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 26 18:26:48.400816 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 26 18:26:48.457983 kernel: audit: type=1130 audit(1769452008.415:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.458024 kernel: audit: type=1131 audit(1769452008.415:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.416125 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 26 18:26:48.458322 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 26 18:26:48.472640 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 26 18:26:48.474223 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 26 18:26:48.562359 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 26 18:26:48.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.573916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 26 18:26:48.629238 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 26 18:26:48.629745 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 26 18:26:48.644399 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 26 18:26:48.644822 systemd[1]: Stopped target timers.target - Timer Units. Jan 26 18:26:48.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.659957 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 26 18:26:48.660200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 26 18:26:48.683886 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 26 18:26:48.684762 systemd[1]: Stopped target basic.target - Basic System. Jan 26 18:26:48.703230 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 26 18:26:48.716916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 26 18:26:48.717203 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 26 18:26:48.739798 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 26 18:26:48.759025 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 26 18:26:48.759218 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 26 18:26:48.772159 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 26 18:26:48.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.800163 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 26 18:26:48.815130 systemd[1]: Stopped target swap.target - Swaps. Jan 26 18:26:48.828100 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 26 18:26:48.828239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 26 18:26:48.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.840140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 26 18:26:48.849086 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 26 18:26:48.861834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 26 18:26:48.864053 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 26 18:26:48.874218 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 26 18:26:48.874398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 26 18:26:48.896962 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 26 18:26:48.897159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 26 18:26:49.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.906638 systemd[1]: Stopped target paths.target - Path Units. Jan 26 18:26:49.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.913875 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 26 18:26:48.917230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 26 18:26:48.933952 systemd[1]: Stopped target slices.target - Slice Units. Jan 26 18:26:48.934354 systemd[1]: Stopped target sockets.target - Socket Units. Jan 26 18:26:49.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.959312 systemd[1]: iscsid.socket: Deactivated successfully. Jan 26 18:26:48.959785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 26 18:26:49.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.975284 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 26 18:26:49.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.975573 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 26 18:26:49.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:48.998117 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 26 18:26:48.998339 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 26 18:26:49.013262 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 26 18:26:49.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.013859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 26 18:26:49.240281 ignition[1104]: INFO : Ignition 2.24.0 Jan 26 18:26:49.240281 ignition[1104]: INFO : Stage: umount Jan 26 18:26:49.240281 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:26:49.240281 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:26:49.240281 ignition[1104]: INFO : umount: umount passed Jan 26 18:26:49.240281 ignition[1104]: INFO : Ignition finished successfully Jan 26 18:26:49.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:49.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.049205 systemd[1]: ignition-files.service: Deactivated successfully. Jan 26 18:26:49.049606 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 26 18:26:49.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.069121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 26 18:26:49.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.074755 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 26 18:26:49.075022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 26 18:26:49.094403 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 26 18:26:49.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.132887 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 26 18:26:49.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.133194 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 26 18:26:49.144292 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 26 18:26:49.144787 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 26 18:26:49.158369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 26 18:26:49.158838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 26 18:26:49.187652 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 26 18:26:49.195346 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 26 18:26:49.233863 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 26 18:26:49.234067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 26 18:26:49.250905 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 26 18:26:49.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.251300 systemd[1]: Stopped target network.target - Network. Jan 26 18:26:49.268284 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 26 18:26:49.268380 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 26 18:26:49.292564 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 26 18:26:49.292640 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 26 18:26:49.299357 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 26 18:26:49.299551 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 26 18:26:49.329932 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 26 18:26:49.330012 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 26 18:26:49.576000 audit: BPF prog-id=6 op=UNLOAD Jan 26 18:26:49.338817 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 26 18:26:49.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.357653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 26 18:26:49.383374 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 26 18:26:49.383786 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 26 18:26:49.610000 audit: BPF prog-id=9 op=UNLOAD Jan 26 18:26:49.393800 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 26 18:26:49.393904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 26 18:26:49.493024 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 26 18:26:49.493275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 26 18:26:49.575085 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 26 18:26:49.575320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 26 18:26:49.611018 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 26 18:26:49.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.615978 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 26 18:26:49.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.616025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 26 18:26:49.645925 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 26 18:26:49.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.670622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 26 18:26:49.670775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 26 18:26:49.686320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 26 18:26:49.686381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 26 18:26:49.694306 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 26 18:26:49.694368 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 26 18:26:49.723247 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 26 18:26:49.797015 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 26 18:26:49.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.797174 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 26 18:26:49.824324 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 26 18:26:49.831737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 26 18:26:49.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.841626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 26 18:26:49.841788 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 26 18:26:49.850059 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 26 18:26:49.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.850106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 26 18:26:49.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.864650 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 26 18:26:49.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.864812 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 26 18:26:49.886079 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 26 18:26:49.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.886157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 26 18:26:49.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.899651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 26 18:26:49.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:49.899803 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 26 18:26:49.911943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 26 18:26:49.920004 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 26 18:26:49.920067 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 26 18:26:49.932580 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 26 18:26:49.932653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 26 18:26:49.944618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 26 18:26:49.944745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:26:50.069905 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 26 18:26:50.116928 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 26 18:26:50.116960 kernel: audit: type=1130 audit(1769452010.078:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:50.116974 kernel: audit: type=1131 audit(1769452010.078:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:50.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:50.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:50.070168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 26 18:26:50.078950 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 26 18:26:50.132146 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 26 18:26:50.181060 systemd[1]: Switching root. Jan 26 18:26:50.226931 systemd-journald[320]: Journal stopped Jan 26 18:26:52.832810 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 26 18:26:52.832874 kernel: audit: type=1335 audit(1769452010.233:80): pid=320 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 26 18:26:52.832903 kernel: SELinux: policy capability network_peer_controls=1 Jan 26 18:26:52.832923 kernel: SELinux: policy capability open_perms=1 Jan 26 18:26:52.832942 kernel: SELinux: policy capability extended_socket_class=1 Jan 26 18:26:52.832957 kernel: SELinux: policy capability always_check_network=0 Jan 26 18:26:52.832973 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 26 18:26:52.832984 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 26 18:26:52.832995 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 26 18:26:52.833006 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 26 18:26:52.833021 kernel: SELinux: policy capability userspace_initial_context=0 Jan 26 18:26:52.833036 kernel: audit: type=1403 audit(1769452010.452:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 26 18:26:52.833048 systemd[1]: Successfully loaded SELinux policy in 112.789ms. Jan 26 18:26:52.833073 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.490ms. 
Jan 26 18:26:52.833085 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 26 18:26:52.833097 systemd[1]: Detected virtualization kvm. Jan 26 18:26:52.833109 systemd[1]: Detected architecture x86-64. Jan 26 18:26:52.833123 systemd[1]: Detected first boot. Jan 26 18:26:52.833142 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 26 18:26:52.833162 kernel: audit: type=1334 audit(1769452010.613:82): prog-id=10 op=LOAD Jan 26 18:26:52.833180 kernel: audit: type=1334 audit(1769452010.613:83): prog-id=10 op=UNLOAD Jan 26 18:26:52.833192 kernel: audit: type=1334 audit(1769452010.613:84): prog-id=11 op=LOAD Jan 26 18:26:52.833203 kernel: audit: type=1334 audit(1769452010.613:85): prog-id=11 op=UNLOAD Jan 26 18:26:52.833215 zram_generator::config[1149]: No configuration found. Jan 26 18:26:52.833230 kernel: Guest personality initialized and is inactive Jan 26 18:26:52.833245 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 26 18:26:52.833258 kernel: Initialized host personality Jan 26 18:26:52.833269 kernel: NET: Registered PF_VSOCK protocol family Jan 26 18:26:52.833280 systemd[1]: Populated /etc with preset unit settings. Jan 26 18:26:52.833291 kernel: audit: type=1334 audit(1769452011.483:86): prog-id=12 op=LOAD Jan 26 18:26:52.833302 kernel: audit: type=1334 audit(1769452011.483:87): prog-id=3 op=UNLOAD Jan 26 18:26:52.833313 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 26 18:26:52.833324 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 26 18:26:52.833336 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 26 18:26:52.833356 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 26 18:26:52.833379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 26 18:26:52.833401 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 26 18:26:52.833553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 26 18:26:52.833567 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 26 18:26:52.833587 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 26 18:26:52.833615 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 26 18:26:52.833636 systemd[1]: Created slice user.slice - User and Session Slice. Jan 26 18:26:52.833656 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 26 18:26:52.833742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 26 18:26:52.833756 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 26 18:26:52.833768 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 26 18:26:52.833783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 26 18:26:52.833796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 26 18:26:52.833816 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 26 18:26:52.833827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 26 18:26:52.833841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 26 18:26:52.833852 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 26 18:26:52.833864 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 26 18:26:52.833875 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 26 18:26:52.833887 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 26 18:26:52.833900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 26 18:26:52.833913 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 26 18:26:52.833936 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 26 18:26:52.833957 systemd[1]: Reached target slices.target - Slice Units. Jan 26 18:26:52.833974 systemd[1]: Reached target swap.target - Swaps. Jan 26 18:26:52.833986 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 26 18:26:52.833998 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 26 18:26:52.834010 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 26 18:26:52.834022 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 26 18:26:52.834037 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 26 18:26:52.834048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 26 18:26:52.834060 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 26 18:26:52.834071 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 26 18:26:52.834083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 26 18:26:52.834095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 26 18:26:52.834107 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 26 18:26:52.834120 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 26 18:26:52.834135 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 26 18:26:52.834146 systemd[1]: Mounting media.mount - External Media Directory... Jan 26 18:26:52.834165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:26:52.834186 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 26 18:26:52.834204 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 26 18:26:52.834216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 26 18:26:52.834233 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 26 18:26:52.834245 systemd[1]: Reached target machines.target - Containers. Jan 26 18:26:52.834257 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 26 18:26:52.834269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 26 18:26:52.834280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 26 18:26:52.834292 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 26 18:26:52.834304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 26 18:26:52.834319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 26 18:26:52.834334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 26 18:26:52.834346 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 26 18:26:52.834357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 26 18:26:52.834377 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 26 18:26:52.834401 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 26 18:26:52.834551 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 26 18:26:52.834568 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 26 18:26:52.834579 systemd[1]: Stopped systemd-fsck-usr.service. Jan 26 18:26:52.834598 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 26 18:26:52.834618 kernel: fuse: init (API version 7.41) Jan 26 18:26:52.834642 kernel: ACPI: bus type drm_connector registered Jan 26 18:26:52.834662 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 26 18:26:52.834754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 26 18:26:52.834767 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 26 18:26:52.834803 systemd-journald[1235]: Collecting audit messages is enabled. Jan 26 18:26:52.834825 systemd-journald[1235]: Journal started Jan 26 18:26:52.834848 systemd-journald[1235]: Runtime Journal (/run/log/journal/fa9db811465b4404a6a39b163932c8fb) is 6M, max 48M, 42M free. Jan 26 18:26:52.035000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 26 18:26:52.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:52.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:52.710000 audit: BPF prog-id=14 op=UNLOAD Jan 26 18:26:52.710000 audit: BPF prog-id=13 op=UNLOAD Jan 26 18:26:52.726000 audit: BPF prog-id=15 op=LOAD Jan 26 18:26:52.730000 audit: BPF prog-id=16 op=LOAD Jan 26 18:26:52.731000 audit: BPF prog-id=17 op=LOAD Jan 26 18:26:52.829000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 26 18:26:52.829000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff63b76860 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:26:52.829000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 26 18:26:51.454927 systemd[1]: Queued start job for default target multi-user.target. Jan 26 18:26:51.485148 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 26 18:26:51.486862 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 26 18:26:51.487318 systemd[1]: systemd-journald.service: Consumed 2.457s CPU time. Jan 26 18:26:52.856039 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 26 18:26:52.871835 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 26 18:26:52.896052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 26 18:26:52.918743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 26 18:26:52.938999 systemd[1]: Started systemd-journald.service - Journal Service. Jan 26 18:26:52.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:52.941340 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 26 18:26:52.949305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 26 18:26:52.957843 systemd[1]: Mounted media.mount - External Media Directory. Jan 26 18:26:52.966132 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 26 18:26:52.975846 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 26 18:26:52.985264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 26 18:26:52.993258 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 26 18:26:53.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.003242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 26 18:26:53.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.013944 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 26 18:26:53.014365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 26 18:26:53.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:53.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.024839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 26 18:26:53.025823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 26 18:26:53.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.035045 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 26 18:26:53.035300 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 26 18:26:53.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.044149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 26 18:26:53.044778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 26 18:26:53.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:53.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.055822 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 26 18:26:53.056177 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 26 18:26:53.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.065938 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 26 18:26:53.066295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 26 18:26:53.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.076203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 26 18:26:53.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:26:53.086852 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 26 18:26:53.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.099789 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 26 18:26:53.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.111593 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 26 18:26:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.122797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 26 18:26:53.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.151993 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 26 18:26:53.161773 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 26 18:26:53.173317 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 26 18:26:53.183877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 26 18:26:53.192771 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 26 18:26:53.192874 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 26 18:26:53.205634 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 26 18:26:53.217047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 26 18:26:53.217277 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 26 18:26:53.220144 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 26 18:26:53.232233 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 26 18:26:53.242217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 26 18:26:53.244031 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 26 18:26:53.251163 systemd-journald[1235]: Time spent on flushing to /var/log/journal/fa9db811465b4404a6a39b163932c8fb is 21.708ms for 1187 entries. Jan 26 18:26:53.251163 systemd-journald[1235]: System Journal (/var/log/journal/fa9db811465b4404a6a39b163932c8fb) is 8M, max 163.5M, 155.5M free. Jan 26 18:26:53.318040 systemd-journald[1235]: Received client request to flush runtime journal. Jan 26 18:26:53.260794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 26 18:26:53.262979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 26 18:26:53.273803 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 26 18:26:53.286827 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 26 18:26:53.298227 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 26 18:26:53.307402 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 26 18:26:53.318359 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 26 18:26:53.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.329765 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 26 18:26:53.339545 kernel: loop1: detected capacity change from 0 to 50784 Jan 26 18:26:53.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.349314 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 26 18:26:53.363960 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 26 18:26:53.377628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 26 18:26:53.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.400196 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 26 18:26:53.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:53.413000 audit: BPF prog-id=18 op=LOAD Jan 26 18:26:53.414000 audit: BPF prog-id=19 op=LOAD Jan 26 18:26:53.414000 audit: BPF prog-id=20 op=LOAD Jan 26 18:26:53.422806 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 26 18:26:53.425752 kernel: loop2: detected capacity change from 0 to 229808 Jan 26 18:26:53.439000 audit: BPF prog-id=21 op=LOAD Jan 26 18:26:53.443752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 26 18:26:53.458664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 26 18:26:53.470883 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 26 18:26:53.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.488000 audit: BPF prog-id=22 op=LOAD Jan 26 18:26:53.497000 audit: BPF prog-id=23 op=LOAD Jan 26 18:26:53.498000 audit: BPF prog-id=24 op=LOAD Jan 26 18:26:53.500561 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 26 18:26:53.512000 audit: BPF prog-id=25 op=LOAD Jan 26 18:26:53.512000 audit: BPF prog-id=26 op=LOAD Jan 26 18:26:53.512000 audit: BPF prog-id=27 op=LOAD Jan 26 18:26:53.517562 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 26 18:26:53.532140 kernel: loop3: detected capacity change from 0 to 111560 Jan 26 18:26:53.573853 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 26 18:26:53.574338 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 26 18:26:53.587909 kernel: loop4: detected capacity change from 0 to 50784 Jan 26 18:26:53.589039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 26 18:26:53.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.627292 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 26 18:26:53.627403 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 26 18:26:53.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.641869 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 26 18:26:53.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.662845 kernel: loop5: detected capacity change from 0 to 229808 Jan 26 18:26:53.718772 kernel: loop6: detected capacity change from 0 to 111560 Jan 26 18:26:53.755299 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 26 18:26:53.761051 systemd-oomd[1286]: No swap; memory pressure usage will be degraded Jan 26 18:26:53.762390 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 26 18:26:53.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:53.777640 (sd-merge)[1297]: Merged extensions into '/usr'. Jan 26 18:26:53.784617 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 26 18:26:53.784781 systemd[1]: Reloading... Jan 26 18:26:53.808752 systemd-resolved[1289]: Positive Trust Anchors: Jan 26 18:26:53.808767 systemd-resolved[1289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 26 18:26:53.808772 systemd-resolved[1289]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 26 18:26:53.808798 systemd-resolved[1289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 26 18:26:53.817795 systemd-resolved[1289]: Defaulting to hostname 'linux'. Jan 26 18:26:53.912785 zram_generator::config[1348]: No configuration found. Jan 26 18:26:54.150941 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 26 18:26:54.151559 systemd[1]: Reloading finished in 366 ms. Jan 26 18:26:54.193846 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 26 18:26:54.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:54.205860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 26 18:26:54.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:54.218392 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 26 18:26:54.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:54.247279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 26 18:26:54.277263 systemd[1]: Starting ensure-sysext.service... Jan 26 18:26:54.285362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 26 18:26:54.295000 audit: BPF prog-id=8 op=UNLOAD Jan 26 18:26:54.295000 audit: BPF prog-id=7 op=UNLOAD Jan 26 18:26:54.298000 audit: BPF prog-id=28 op=LOAD Jan 26 18:26:54.304000 audit: BPF prog-id=29 op=LOAD Jan 26 18:26:54.306355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 26 18:26:54.318000 audit: BPF prog-id=30 op=LOAD Jan 26 18:26:54.318000 audit: BPF prog-id=15 op=UNLOAD Jan 26 18:26:54.318000 audit: BPF prog-id=31 op=LOAD Jan 26 18:26:54.319000 audit: BPF prog-id=32 op=LOAD Jan 26 18:26:54.319000 audit: BPF prog-id=16 op=UNLOAD Jan 26 18:26:54.319000 audit: BPF prog-id=17 op=UNLOAD Jan 26 18:26:54.320000 audit: BPF prog-id=33 op=LOAD Jan 26 18:26:54.320000 audit: BPF prog-id=18 op=UNLOAD Jan 26 18:26:54.321000 audit: BPF prog-id=34 op=LOAD Jan 26 18:26:54.321000 audit: BPF prog-id=35 op=LOAD Jan 26 18:26:54.321000 audit: BPF prog-id=19 op=UNLOAD Jan 26 18:26:54.321000 audit: BPF prog-id=20 op=UNLOAD Jan 26 18:26:54.324000 audit: BPF prog-id=36 op=LOAD Jan 26 18:26:54.324000 audit: BPF prog-id=22 op=UNLOAD Jan 26 18:26:54.324000 audit: BPF prog-id=37 op=LOAD Jan 26 18:26:54.324000 audit: BPF prog-id=38 op=LOAD Jan 26 18:26:54.324000 audit: BPF prog-id=23 op=UNLOAD Jan 26 18:26:54.324000 audit: BPF prog-id=24 op=UNLOAD Jan 26 18:26:54.325000 audit: BPF prog-id=39 op=LOAD
Jan 26 18:26:54.325000 audit: BPF prog-id=25 op=UNLOAD Jan 26 18:26:54.326000 audit: BPF prog-id=40 op=LOAD Jan 26 18:26:54.326000 audit: BPF prog-id=41 op=LOAD Jan 26 18:26:54.326000 audit: BPF prog-id=26 op=UNLOAD Jan 26 18:26:54.326000 audit: BPF prog-id=27 op=UNLOAD Jan 26 18:26:54.327000 audit: BPF prog-id=42 op=LOAD Jan 26 18:26:54.327000 audit: BPF prog-id=21 op=UNLOAD Jan 26 18:26:54.335825 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 26 18:26:54.335860 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 26 18:26:54.336155 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 26 18:26:54.338163 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 26 18:26:54.338239 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 26 18:26:54.338938 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 26 18:26:54.339030 systemd[1]: Reloading... Jan 26 18:26:54.349185 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 26 18:26:54.349261 systemd-tmpfiles[1380]: Skipping /boot Jan 26 18:26:54.365158 systemd-udevd[1381]: Using default interface naming scheme 'v257'. Jan 26 18:26:54.380185 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 26 18:26:54.380205 systemd-tmpfiles[1380]: Skipping /boot Jan 26 18:26:54.440593 zram_generator::config[1413]: No configuration found.
Jan 26 18:26:54.607580 kernel: mousedev: PS/2 mouse device common for all mice Jan 26 18:26:54.650795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 26 18:26:54.664821 kernel: ACPI: button: Power Button [PWRF] Jan 26 18:26:54.689166 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 26 18:26:54.689902 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 26 18:26:54.707930 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 26 18:26:54.842239 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 26 18:26:54.842893 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 26 18:26:54.855821 systemd[1]: Reloading finished in 516 ms. Jan 26 18:26:54.870967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 26 18:26:54.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:54.882000 audit: BPF prog-id=43 op=LOAD Jan 26 18:26:54.882000 audit: BPF prog-id=36 op=UNLOAD Jan 26 18:26:54.883000 audit: BPF prog-id=44 op=LOAD Jan 26 18:26:54.883000 audit: BPF prog-id=45 op=LOAD Jan 26 18:26:54.883000 audit: BPF prog-id=37 op=UNLOAD Jan 26 18:26:54.883000 audit: BPF prog-id=38 op=UNLOAD Jan 26 18:26:54.884000 audit: BPF prog-id=46 op=LOAD Jan 26 18:26:54.885000 audit: BPF prog-id=42 op=UNLOAD Jan 26 18:26:54.885000 audit: BPF prog-id=47 op=LOAD Jan 26 18:26:54.885000 audit: BPF prog-id=48 op=LOAD Jan 26 18:26:54.886000 audit: BPF prog-id=28 op=UNLOAD Jan 26 18:26:54.886000 audit: BPF prog-id=29 op=UNLOAD Jan 26 18:26:54.890000 audit: BPF prog-id=49 op=LOAD Jan 26 18:26:54.890000 audit: BPF prog-id=30 op=UNLOAD Jan 26 18:26:54.890000 audit: BPF prog-id=50 op=LOAD Jan 26 18:26:54.890000 audit: BPF prog-id=51 op=LOAD Jan 26 18:26:54.890000 audit: BPF prog-id=31 op=UNLOAD Jan 26 18:26:54.890000 audit: BPF prog-id=32 op=UNLOAD Jan 26 18:26:54.892000 audit: BPF prog-id=52 op=LOAD Jan 26 18:26:54.895000 audit: BPF prog-id=39 op=UNLOAD Jan 26 18:26:54.895000 audit: BPF prog-id=53 op=LOAD Jan 26 18:26:54.895000 audit: BPF prog-id=54 op=LOAD Jan 26 18:26:54.895000 audit: BPF prog-id=40 op=UNLOAD Jan 26 18:26:54.895000 audit: BPF prog-id=41 op=UNLOAD Jan 26 18:26:54.896000 audit: BPF prog-id=55 op=LOAD Jan 26 18:26:54.896000 audit: BPF prog-id=33 op=UNLOAD Jan 26 18:26:54.896000 audit: BPF prog-id=56 op=LOAD Jan 26 18:26:54.896000 audit: BPF prog-id=57 op=LOAD Jan 26 18:26:54.896000 audit: BPF prog-id=34 op=UNLOAD Jan 26 18:26:54.896000 audit: BPF prog-id=35 op=UNLOAD Jan 26 18:26:54.901950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 26 18:26:54.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:55.192995 systemd[1]: Finished ensure-sysext.service. Jan 26 18:26:55.221804 kernel: kauditd_printk_skb: 123 callbacks suppressed Jan 26 18:26:55.221899 kernel: audit: type=1130 audit(1769452015.200:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.220624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:26:55.223021 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 26 18:26:55.283538 kernel: kvm_amd: TSC scaling supported Jan 26 18:26:55.283645 kernel: kvm_amd: Nested Virtualization enabled Jan 26 18:26:55.283661 kernel: kvm_amd: Nested Paging enabled Jan 26 18:26:55.291910 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 26 18:26:55.291985 kernel: kvm_amd: PMU virtualization is disabled Jan 26 18:26:55.308186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 26 18:26:55.319131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 26 18:26:55.324778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 26 18:26:55.339344 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 26 18:26:55.352021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 26 18:26:55.365098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 26 18:26:55.365603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 26 18:26:55.365819 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 26 18:26:55.367985 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 26 18:26:55.395299 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 26 18:26:55.406232 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 26 18:26:55.412600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 26 18:26:55.436177 kernel: audit: type=1334 audit(1769452015.426:210): prog-id=58 op=LOAD Jan 26 18:26:55.426000 audit: BPF prog-id=58 op=LOAD Jan 26 18:26:55.458995 kernel: audit: type=1334 audit(1769452015.437:211): prog-id=59 op=LOAD Jan 26 18:26:55.437000 audit: BPF prog-id=59 op=LOAD Jan 26 18:26:55.436897 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 26 18:26:55.447087 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 26 18:26:55.471264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 26 18:26:55.490918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 26 18:26:55.500603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:26:55.517201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 26 18:26:55.557319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 26 18:26:55.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.575811 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 26 18:26:55.576154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 26 18:26:55.628010 kernel: audit: type=1130 audit(1769452015.567:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.628129 kernel: audit: type=1131 audit(1769452015.574:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.628151 kernel: audit: type=1127 audit(1769452015.577:214): pid=1516 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.577000 audit[1516]: SYSTEM_BOOT pid=1516 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:55.637038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 26 18:26:55.637367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 26 18:26:55.656943 kernel: audit: type=1130 audit(1769452015.635:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.679571 kernel: audit: type=1131 audit(1769452015.635:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.696296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 26 18:26:55.714784 kernel: audit: type=1130 audit(1769452015.692:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:26:55.736554 kernel: audit: type=1131 audit(1769452015.692:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:26:55.773016 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 26 18:26:55.773000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 26 18:26:55.773000 audit[1530]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe50dfeb60 a2=420 a3=0 items=0 ppid=1495 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:26:55.773000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:26:55.778130 augenrules[1530]: No rules Jan 26 18:26:55.776897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 26 18:26:55.777993 systemd[1]: audit-rules.service: Deactivated successfully. Jan 26 18:26:55.778310 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 26 18:26:55.788620 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 26 18:26:55.798983 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 26 18:26:55.812230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 26 18:26:55.812599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 26 18:26:55.812838 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 26 18:26:55.823306 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 26 18:26:55.990605 kernel: EDAC MC: Ver: 3.0.0 Jan 26 18:26:56.006008 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 26 18:26:56.018979 systemd-networkd[1514]: lo: Link UP Jan 26 18:26:56.019036 systemd[1]: Reached target time-set.target - System Time Set. Jan 26 18:26:56.019069 systemd-networkd[1514]: lo: Gained carrier Jan 26 18:26:56.024594 systemd-networkd[1514]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:26:56.024766 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 26 18:26:56.028267 systemd-networkd[1514]: eth0: Link UP Jan 26 18:26:56.030956 systemd-networkd[1514]: eth0: Gained carrier Jan 26 18:26:56.031068 systemd-networkd[1514]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:26:56.035311 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 26 18:26:56.047861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:26:56.067089 systemd[1]: Reached target network.target - Network. Jan 26 18:26:56.077631 systemd-networkd[1514]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 26 18:26:56.078667 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 26 18:26:56.080622 systemd-timesyncd[1515]: Network configuration changed, trying to establish connection. Jan 26 18:26:57.119289 systemd-timesyncd[1515]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 26 18:26:57.119351 systemd-timesyncd[1515]: Initial clock synchronization to Mon 2026-01-26 18:26:57.119112 UTC. Jan 26 18:26:57.119484 systemd-resolved[1289]: Clock change detected. Flushing caches. Jan 26 18:26:57.126261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 26 18:26:57.181328 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 26 18:26:57.646359 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 26 18:26:57.654548 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 26 18:26:57.669464 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 26 18:26:57.713002 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 26 18:26:57.722887 systemd[1]: Reached target sysinit.target - System Initialization. Jan 26 18:26:57.732685 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 26 18:26:57.744919 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 26 18:26:57.755712 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 26 18:26:57.766347 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 26 18:26:57.776562 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 26 18:26:57.789209 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 26 18:26:57.801459 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 26 18:26:57.810727 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 26 18:26:57.822130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 26 18:26:57.822231 systemd[1]: Reached target paths.target - Path Units. Jan 26 18:26:57.829942 systemd[1]: Reached target timers.target - Timer Units. Jan 26 18:26:57.838423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 26 18:26:57.851337 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 26 18:26:57.863137 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 26 18:26:57.873292 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 26 18:26:57.883459 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 26 18:26:57.895387 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 26 18:26:57.903942 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 26 18:26:57.914395 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 26 18:26:57.923700 systemd[1]: Reached target sockets.target - Socket Units. Jan 26 18:26:57.931369 systemd[1]: Reached target basic.target - Basic System. Jan 26 18:26:57.938879 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 26 18:26:57.938987 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 26 18:26:57.940683 systemd[1]: Starting containerd.service - containerd container runtime... Jan 26 18:26:57.974997 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 26 18:26:57.985451 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 26 18:26:58.008531 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 26 18:26:58.020237 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 26 18:26:58.029567 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 26 18:26:58.040697 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 26 18:26:58.053935 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 26 18:26:58.065426 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 26 18:26:58.072494 jq[1565]: false Jan 26 18:26:58.072723 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 26 18:26:58.067007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 26 18:26:58.079220 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 26 18:26:58.089982 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 26 18:26:58.089982 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 26 18:26:58.089982 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 26 18:26:58.089177 oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 26 18:26:58.089198 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 26 18:26:58.089251 oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 26 18:26:58.090641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 26 18:26:58.108143 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 26 18:26:58.116254 oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 26 18:26:58.116998 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 26 18:26:58.116998 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 26 18:26:58.115550 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 26 18:26:58.116269 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 26 18:26:58.116430 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 26 18:26:58.119596 extend-filesystems[1566]: Found /dev/vda6 Jan 26 18:26:58.120108 systemd[1]: Starting update-engine.service - Update Engine... Jan 26 18:26:58.127860 extend-filesystems[1566]: Found /dev/vda9 Jan 26 18:26:58.142470 extend-filesystems[1566]: Checking size of /dev/vda9 Jan 26 18:26:58.156137 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 26 18:26:58.171428 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 26 18:26:58.178297 update_engine[1582]: I20260126 18:26:58.177716 1582 main.cc:92] Flatcar Update Engine starting Jan 26 18:26:58.181570 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 26 18:26:58.184420 extend-filesystems[1566]: Resized partition /dev/vda9 Jan 26 18:26:58.186114 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 26 18:26:58.186567 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 26 18:26:58.187256 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 26 18:26:58.196157 jq[1589]: true Jan 26 18:26:58.199198 extend-filesystems[1596]: resize2fs 1.47.3 (8-Jul-2025) Jan 26 18:26:58.200931 systemd[1]: motdgen.service: Deactivated successfully. Jan 26 18:26:58.201443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 26 18:26:58.214003 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 26 18:26:58.232939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 26 18:26:58.233295 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 26 18:26:58.296229 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 26 18:26:58.325727 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 26 18:26:58.325727 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 26 18:26:58.325727 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 26 18:26:58.364892 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Jan 26 18:26:58.345432 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 26 18:26:58.365183 jq[1601]: true Jan 26 18:26:58.345932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 26 18:26:58.362523 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Jan 26 18:26:58.362545 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 26 18:26:58.364274 systemd-logind[1580]: New seat seat0. Jan 26 18:26:58.373288 systemd[1]: Started systemd-logind.service - User Login Management. Jan 26 18:26:58.377129 dbus-daemon[1563]: [system] SELinux support is enabled Jan 26 18:26:58.382631 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 26 18:26:58.384115 update_engine[1582]: I20260126 18:26:58.383982 1582 update_check_scheduler.cc:74] Next update check in 7m46s
Jan 26 18:26:58.402710 tar[1599]: linux-amd64/LICENSE
Jan 26 18:26:58.403367 tar[1599]: linux-amd64/helm
Jan 26 18:26:58.403552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 26 18:26:58.403654 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 26 18:26:58.405456 dbus-daemon[1563]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 26 18:26:58.414963 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 26 18:26:58.415137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 26 18:26:58.426135 systemd[1]: Started update-engine.service - Update Engine.
Jan 26 18:26:58.438680 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 26 18:26:58.517922 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Jan 26 18:26:58.522304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 26 18:26:58.534602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 26 18:26:58.546104 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 26 18:26:58.734155 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 26 18:26:58.756644 containerd[1602]: time="2026-01-26T18:26:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 26 18:26:58.762722 containerd[1602]: time="2026-01-26T18:26:58.762661603Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 26 18:26:58.787571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 26 18:26:58.807267 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 26 18:26:58.892446 systemd[1]: issuegen.service: Deactivated successfully.
Jan 26 18:26:58.893215 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 26 18:26:58.908175 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 26 18:26:58.926094 tar[1599]: linux-amd64/README.md
Jan 26 18:26:58.928525 containerd[1602]: time="2026-01-26T18:26:58.928388833Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.37µs"
Jan 26 18:26:58.928525 containerd[1602]: time="2026-01-26T18:26:58.928496124Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 26 18:26:58.928635 containerd[1602]: time="2026-01-26T18:26:58.928544665Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 26 18:26:58.928635 containerd[1602]: time="2026-01-26T18:26:58.928560084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 26 18:26:58.929198 containerd[1602]: time="2026-01-26T18:26:58.929003691Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 26 18:26:58.929198 containerd[1602]: time="2026-01-26T18:26:58.929185992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 26 18:26:58.929361 containerd[1602]: time="2026-01-26T18:26:58.929272975Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 26 18:26:58.929387 containerd[1602]: time="2026-01-26T18:26:58.929356710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930100 containerd[1602]: time="2026-01-26T18:26:58.929632135Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930100 containerd[1602]: time="2026-01-26T18:26:58.929947623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930100 containerd[1602]: time="2026-01-26T18:26:58.929966990Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930100 containerd[1602]: time="2026-01-26T18:26:58.929978080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930408 containerd[1602]: time="2026-01-26T18:26:58.930293249Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930408 containerd[1602]: time="2026-01-26T18:26:58.930391051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 26 18:26:58.930618 containerd[1602]: time="2026-01-26T18:26:58.930512298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.931202 containerd[1602]: time="2026-01-26T18:26:58.931012772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.931233 containerd[1602]: time="2026-01-26T18:26:58.931210161Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 26 18:26:58.931233 containerd[1602]: time="2026-01-26T18:26:58.931225660Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 26 18:26:58.931423 containerd[1602]: time="2026-01-26T18:26:58.931338040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 26 18:26:58.931726 containerd[1602]: time="2026-01-26T18:26:58.931620758Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 26 18:26:58.932003 containerd[1602]: time="2026-01-26T18:26:58.931919425Z" level=info msg="metadata content store policy set" policy=shared
Jan 26 18:26:58.943655 containerd[1602]: time="2026-01-26T18:26:58.943532090Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 26 18:26:58.943695 containerd[1602]: time="2026-01-26T18:26:58.943663936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 26 18:26:58.944011 containerd[1602]: time="2026-01-26T18:26:58.943905988Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 26 18:26:58.944011 containerd[1602]: time="2026-01-26T18:26:58.944002118Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 26 18:26:58.944145 containerd[1602]: time="2026-01-26T18:26:58.944093799Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 26 18:26:58.944145 containerd[1602]: time="2026-01-26T18:26:58.944110259Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 26 18:26:58.944145 containerd[1602]: time="2026-01-26T18:26:58.944127231Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 26 18:26:58.944145 containerd[1602]: time="2026-01-26T18:26:58.944136719Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944150815Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944168408Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944179529Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944189948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944199386Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 26 18:26:58.944256 containerd[1602]: time="2026-01-26T18:26:58.944210186Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 26 18:26:58.944387 containerd[1602]: time="2026-01-26T18:26:58.944341311Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 26 18:26:58.944387 containerd[1602]: time="2026-01-26T18:26:58.944364935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 26 18:26:58.944387 containerd[1602]: time="2026-01-26T18:26:58.944377649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 26 18:26:58.944387 containerd[1602]: time="2026-01-26T18:26:58.944387096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944396674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944405782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944415930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944425247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944434054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944443101Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944452318Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 26 18:26:58.944507 containerd[1602]: time="2026-01-26T18:26:58.944472716Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 26 18:26:58.944635 containerd[1602]: time="2026-01-26T18:26:58.944514765Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 26 18:26:58.944635 containerd[1602]: time="2026-01-26T18:26:58.944525955Z" level=info msg="Start snapshots syncer"
Jan 26 18:26:58.944635 containerd[1602]: time="2026-01-26T18:26:58.944544149Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 26 18:26:58.945269 containerd[1602]: time="2026-01-26T18:26:58.945112952Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 26 18:26:58.945488 containerd[1602]: time="2026-01-26T18:26:58.945405758Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.945933343Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946314265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946335514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946346064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946355942Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946365910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946375398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946384436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946393472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 26 18:26:58.946459 containerd[1602]: time="2026-01-26T18:26:58.946402880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 26 18:26:58.946640 containerd[1602]: time="2026-01-26T18:26:58.946509859Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 26 18:26:58.947246 containerd[1602]: time="2026-01-26T18:26:58.946592464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 26 18:26:58.947246 containerd[1602]: time="2026-01-26T18:26:58.946961232Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 26 18:26:58.947246 containerd[1602]: time="2026-01-26T18:26:58.946974337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 26 18:26:58.947246 containerd[1602]: time="2026-01-26T18:26:58.946982201Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 26 18:26:58.947397 containerd[1602]: time="2026-01-26T18:26:58.947375896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 26 18:26:58.947417 containerd[1602]: time="2026-01-26T18:26:58.947405842Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 26 18:26:58.947446 containerd[1602]: time="2026-01-26T18:26:58.947423836Z" level=info msg="runtime interface created"
Jan 26 18:26:58.947446 containerd[1602]: time="2026-01-26T18:26:58.947429777Z" level=info msg="created NRI interface"
Jan 26 18:26:58.947446 containerd[1602]: time="2026-01-26T18:26:58.947436980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 26 18:26:58.947446 containerd[1602]: time="2026-01-26T18:26:58.947447199Z" level=info msg="Connect containerd service"
Jan 26 18:26:58.947507 containerd[1602]: time="2026-01-26T18:26:58.947465363Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 26 18:26:58.953597 containerd[1602]: time="2026-01-26T18:26:58.953388664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 26 18:26:58.953400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 26 18:26:58.964544 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 26 18:26:58.978966 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 26 18:26:58.990390 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 26 18:26:58.999525 systemd[1]: Reached target getty.target - Login Prompts.
Jan 26 18:26:59.049324 systemd-networkd[1514]: eth0: Gained IPv6LL
Jan 26 18:26:59.054537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 26 18:26:59.066392 systemd[1]: Reached target network-online.target - Network is Online.
Jan 26 18:26:59.080607 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 26 18:26:59.105411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 26 18:26:59.118146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 26 18:26:59.178090 containerd[1602]: time="2026-01-26T18:26:59.177906994Z" level=info msg="Start subscribing containerd event"
Jan 26 18:26:59.180616 containerd[1602]: time="2026-01-26T18:26:59.179893322Z" level=info msg="Start recovering state"
Jan 26 18:26:59.180947 containerd[1602]: time="2026-01-26T18:26:59.179010809Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 26 18:26:59.181171 containerd[1602]: time="2026-01-26T18:26:59.181152433Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 26 18:26:59.183547 containerd[1602]: time="2026-01-26T18:26:59.182707817Z" level=info msg="Start event monitor"
Jan 26 18:26:59.187218 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.188387250Z" level=info msg="Start cni network conf syncer for default"
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.188419510Z" level=info msg="Start streaming server"
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.188430150Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.193648517Z" level=info msg="runtime interface starting up..."
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.193665008Z" level=info msg="starting plugins..."
Jan 26 18:26:59.197120 containerd[1602]: time="2026-01-26T18:26:59.193686758Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 26 18:26:59.187635 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 26 18:26:59.197289 containerd[1602]: time="2026-01-26T18:26:59.197143452Z" level=info msg="containerd successfully booted in 0.445702s"
Jan 26 18:26:59.199708 systemd[1]: Started containerd.service - containerd container runtime.
Jan 26 18:26:59.218312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 26 18:26:59.231656 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 26 18:26:59.322254 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 26 18:26:59.334413 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:60766.service - OpenSSH per-connection server daemon (10.0.0.1:60766).
Jan 26 18:26:59.523439 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 60766 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:26:59.530255 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:26:59.544994 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 26 18:26:59.556622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 26 18:26:59.579163 systemd-logind[1580]: New session 1 of user core.
Jan 26 18:26:59.610717 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 26 18:26:59.628549 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 26 18:26:59.675678 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:26:59.686318 systemd-logind[1580]: New session 2 of user core.
Jan 26 18:26:59.907302 systemd[1706]: Queued start job for default target default.target.
Jan 26 18:26:59.919565 systemd[1706]: Created slice app.slice - User Application Slice.
Jan 26 18:26:59.919696 systemd[1706]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 26 18:26:59.919716 systemd[1706]: Reached target paths.target - Paths.
Jan 26 18:26:59.920146 systemd[1706]: Reached target timers.target - Timers.
Jan 26 18:26:59.923403 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 26 18:26:59.925498 systemd[1706]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 26 18:26:59.946585 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 26 18:26:59.946712 systemd[1706]: Reached target sockets.target - Sockets.
Jan 26 18:26:59.951512 systemd[1706]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 26 18:26:59.951950 systemd[1706]: Reached target basic.target - Basic System.
Jan 26 18:26:59.952150 systemd[1706]: Reached target default.target - Main User Target.
Jan 26 18:26:59.952187 systemd[1706]: Startup finished in 249ms.
Jan 26 18:26:59.953015 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 26 18:26:59.974126 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 26 18:27:00.014411 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:60778.service - OpenSSH per-connection server daemon (10.0.0.1:60778).
Jan 26 18:27:00.129437 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 60778 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:00.134198 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:00.150465 systemd-logind[1580]: New session 3 of user core.
Jan 26 18:27:00.171640 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 26 18:27:00.218392 sshd[1724]: Connection closed by 10.0.0.1 port 60778
Jan 26 18:27:00.219163 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:00.232954 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:60778.service: Deactivated successfully.
Jan 26 18:27:00.236458 systemd[1]: session-3.scope: Deactivated successfully.
Jan 26 18:27:00.238356 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit.
Jan 26 18:27:00.242688 systemd-logind[1580]: Removed session 3.
Jan 26 18:27:00.245160 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:60792.service - OpenSSH per-connection server daemon (10.0.0.1:60792).
Jan 26 18:27:00.340151 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60792 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:00.343189 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:00.354281 systemd-logind[1580]: New session 4 of user core.
Jan 26 18:27:00.370340 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 26 18:27:00.410276 sshd[1734]: Connection closed by 10.0.0.1 port 60792
Jan 26 18:27:00.410613 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:00.417322 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:60792.service: Deactivated successfully.
Jan 26 18:27:00.421287 systemd[1]: session-4.scope: Deactivated successfully.
Jan 26 18:27:00.423445 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit.
Jan 26 18:27:00.427737 systemd-logind[1580]: Removed session 4.
Jan 26 18:27:00.572349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 26 18:27:00.584137 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 26 18:27:00.584162 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 26 18:27:00.594696 systemd[1]: Startup finished in 6.734s (kernel) + 11.596s (initrd) + 9.231s (userspace) = 27.561s.
Jan 26 18:27:01.597890 kubelet[1744]: E0126 18:27:01.597462 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 26 18:27:01.602532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 18:27:01.603110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 26 18:27:01.603690 systemd[1]: kubelet.service: Consumed 1.399s CPU time, 268.2M memory peak.
Jan 26 18:27:10.438713 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:33420.service - OpenSSH per-connection server daemon (10.0.0.1:33420).
Jan 26 18:27:10.552591 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 33420 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:10.555730 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:10.569183 systemd-logind[1580]: New session 5 of user core.
Jan 26 18:27:10.579321 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 26 18:27:10.612942 sshd[1761]: Connection closed by 10.0.0.1 port 33420
Jan 26 18:27:10.613707 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:10.637690 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:33420.service: Deactivated successfully.
Jan 26 18:27:10.641660 systemd[1]: session-5.scope: Deactivated successfully.
Jan 26 18:27:10.644018 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit.
Jan 26 18:27:10.650646 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:33430.service - OpenSSH per-connection server daemon (10.0.0.1:33430).
Jan 26 18:27:10.651721 systemd-logind[1580]: Removed session 5.
Jan 26 18:27:10.755002 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 33430 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:10.758412 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:10.770707 systemd-logind[1580]: New session 6 of user core.
Jan 26 18:27:10.786391 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 26 18:27:10.809722 sshd[1771]: Connection closed by 10.0.0.1 port 33430
Jan 26 18:27:10.810629 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:10.824392 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:33430.service: Deactivated successfully.
Jan 26 18:27:10.828253 systemd[1]: session-6.scope: Deactivated successfully.
Jan 26 18:27:10.831176 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit.
Jan 26 18:27:10.835524 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:33444.service - OpenSSH per-connection server daemon (10.0.0.1:33444).
Jan 26 18:27:10.837464 systemd-logind[1580]: Removed session 6.
Jan 26 18:27:10.929493 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 33444 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:10.932545 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:10.945224 systemd-logind[1580]: New session 7 of user core.
Jan 26 18:27:10.955174 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 26 18:27:10.986020 sshd[1781]: Connection closed by 10.0.0.1 port 33444
Jan 26 18:27:10.986703 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:10.998507 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:33444.service: Deactivated successfully.
Jan 26 18:27:11.001737 systemd[1]: session-7.scope: Deactivated successfully.
Jan 26 18:27:11.004617 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit.
Jan 26 18:27:11.009693 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:33456.service - OpenSSH per-connection server daemon (10.0.0.1:33456).
Jan 26 18:27:11.011266 systemd-logind[1580]: Removed session 7.
Jan 26 18:27:11.094346 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 33456 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:11.096339 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:11.108396 systemd-logind[1580]: New session 8 of user core.
Jan 26 18:27:11.122203 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 26 18:27:11.174552 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 26 18:27:11.175542 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 26 18:27:11.208245 sudo[1792]: pam_unix(sudo:session): session closed for user root
Jan 26 18:27:11.211528 sshd[1791]: Connection closed by 10.0.0.1 port 33456
Jan 26 18:27:11.211588 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
Jan 26 18:27:11.231315 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:33456.service: Deactivated successfully.
Jan 26 18:27:11.235507 systemd[1]: session-8.scope: Deactivated successfully.
Jan 26 18:27:11.238276 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit.
Jan 26 18:27:11.246662 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:33466.service - OpenSSH per-connection server daemon (10.0.0.1:33466).
Jan 26 18:27:11.248292 systemd-logind[1580]: Removed session 8.
Jan 26 18:27:11.352596 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 33466 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:27:11.355629 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:27:11.368215 systemd-logind[1580]: New session 9 of user core.
Jan 26 18:27:11.390315 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 26 18:27:11.425577 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 26 18:27:11.426429 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:27:11.436253 sudo[1805]: pam_unix(sudo:session): session closed for user root Jan 26 18:27:11.453150 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 26 18:27:11.453591 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:27:11.469928 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 26 18:27:11.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 26 18:27:11.579016 augenrules[1829]: No rules Jan 26 18:27:11.581590 systemd[1]: audit-rules.service: Deactivated successfully. Jan 26 18:27:11.582276 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 26 18:27:11.584560 sudo[1804]: pam_unix(sudo:session): session closed for user root Jan 26 18:27:11.585047 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 26 18:27:11.585217 kernel: audit: type=1305 audit(1769452031.577:221): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 26 18:27:11.588163 sshd[1803]: Connection closed by 10.0.0.1 port 33466 Jan 26 18:27:11.591185 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jan 26 18:27:11.577000 audit[1829]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff482597b0 a2=420 a3=0 items=0 ppid=1810 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:11.631656 kernel: audit: type=1300 audit(1769452031.577:221): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff482597b0 a2=420 a3=0 items=0 ppid=1810 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:11.631725 kernel: audit: type=1327 audit(1769452031.577:221): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:27:11.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:27:11.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.668049 kernel: audit: type=1130 audit(1769452031.582:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:27:11.668175 kernel: audit: type=1131 audit(1769452031.582:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.582000 audit[1804]: USER_END pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.713280 kernel: audit: type=1106 audit(1769452031.582:224): pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.713344 kernel: audit: type=1104 audit(1769452031.582:225): pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.582000 audit[1804]: CRED_DISP pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 26 18:27:11.735258 kernel: audit: type=1106 audit(1769452031.590:226): pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.590000 audit[1799]: USER_END pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.590000 audit[1799]: CRED_DISP pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.792626 kernel: audit: type=1104 audit(1769452031.590:227): pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.803950 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:33466.service: Deactivated successfully. Jan 26 18:27:11.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.106:22-10.0.0.1:33466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.808230 systemd[1]: session-9.scope: Deactivated successfully. Jan 26 18:27:11.811264 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 26 18:27:11.811630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 26 18:27:11.819043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:11.821707 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:33482.service - OpenSSH per-connection server daemon (10.0.0.1:33482). Jan 26 18:27:11.824955 systemd-logind[1580]: Removed session 9. Jan 26 18:27:11.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.106:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.836214 kernel: audit: type=1131 audit(1769452031.803:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.106:22-10.0.0.1:33466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.922030 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 33482 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:27:11.920000 audit[1839]: USER_ACCT pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.923000 audit[1839]: CRED_ACQ pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.925442 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:27:11.923000 audit[1839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda3d28790 a2=3 a3=0 items=0 ppid=1 pid=1839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:11.923000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:27:11.937154 systemd-logind[1580]: New session 10 of user core. Jan 26 18:27:11.948237 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 26 18:27:11.953000 audit[1839]: USER_START pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.958000 audit[1845]: CRED_ACQ pid=1845 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:27:11.983000 audit[1846]: USER_ACCT pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.984699 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 26 18:27:11.984000 audit[1846]: CRED_REFR pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.984000 audit[1846]: USER_START pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:27:11.985495 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:27:12.118399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 26 18:27:12.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:12.137347 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:27:12.272204 kubelet[1862]: E0126 18:27:12.270051 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:27:12.281349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:27:12.281686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:27:12.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:12.283014 systemd[1]: kubelet.service: Consumed 409ms CPU time, 109.8M memory peak. Jan 26 18:27:12.725552 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 26 18:27:12.741484 (dockerd)[1884]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 26 18:27:13.337482 dockerd[1884]: time="2026-01-26T18:27:13.337267392Z" level=info msg="Starting up" Jan 26 18:27:13.339735 dockerd[1884]: time="2026-01-26T18:27:13.339604996Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 26 18:27:13.377407 dockerd[1884]: time="2026-01-26T18:27:13.377263122Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 26 18:27:13.528508 dockerd[1884]: time="2026-01-26T18:27:13.528205008Z" level=info msg="Loading containers: start." Jan 26 18:27:13.556909 kernel: Initializing XFRM netlink socket Jan 26 18:27:13.781000 audit[1938]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.781000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff27094940 a2=0 a3=0 items=0 ppid=1884 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 26 18:27:13.795000 audit[1940]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.795000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe33b8d2a0 a2=0 a3=0 items=0 ppid=1884 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:27:13.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 26 18:27:13.807000 audit[1942]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.807000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd18d254f0 a2=0 a3=0 items=0 ppid=1884 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.807000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 26 18:27:13.821000 audit[1944]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.821000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcb2fea00 a2=0 a3=0 items=0 ppid=1884 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 26 18:27:13.833000 audit[1946]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.833000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7fd32310 a2=0 a3=0 items=0 ppid=1884 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.833000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 26 18:27:13.847000 audit[1948]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.847000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa1a558c0 a2=0 a3=0 items=0 ppid=1884 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:27:13.858000 audit[1950]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.858000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff55012120 a2=0 a3=0 items=0 ppid=1884 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.858000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:27:13.873000 audit[1952]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.873000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd32e9fa0 a2=0 a3=0 items=0 ppid=1884 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.873000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 26 18:27:13.963000 audit[1955]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.963000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffec4ed3930 a2=0 a3=0 items=0 ppid=1884 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 26 18:27:13.975000 audit[1957]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.975000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd81d3eb70 a2=0 a3=0 items=0 ppid=1884 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 26 18:27:13.987000 audit[1959]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.987000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff14c6c390 a2=0 a3=0 items=0 ppid=1884 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 26 18:27:13.998000 audit[1961]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:13.998000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffccca03e70 a2=0 a3=0 items=0 ppid=1884 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:13.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:27:14.009000 audit[1963]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.009000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcb68e4e30 a2=0 a3=0 items=0 ppid=1884 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.009000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 26 18:27:14.205000 audit[1993]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.205000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff0dda68d0 a2=0 a3=0 items=0 ppid=1884 pid=1993 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 26 18:27:14.218000 audit[1995]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.218000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffff059fe00 a2=0 a3=0 items=0 ppid=1884 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 26 18:27:14.230000 audit[1997]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.230000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff882665c0 a2=0 a3=0 items=0 ppid=1884 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 26 18:27:14.242000 audit[1999]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.242000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd32161370 a2=0 a3=0 items=0 ppid=1884 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 26 18:27:14.256000 audit[2001]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.256000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff697a72f0 a2=0 a3=0 items=0 ppid=1884 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 26 18:27:14.267000 audit[2003]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.267000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1b8feda0 a2=0 a3=0 items=0 ppid=1884 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:27:14.278000 audit[2005]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.278000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffec087deb0 a2=0 a3=0 items=0 ppid=1884 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:27:14.291000 audit[2007]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.291000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe5fbe4010 a2=0 a3=0 items=0 ppid=1884 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 26 18:27:14.306000 audit[2009]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.306000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdf9754810 a2=0 a3=0 items=0 ppid=1884 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 26 18:27:14.317000 audit[2011]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 
18:27:14.317000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe94745ed0 a2=0 a3=0 items=0 ppid=1884 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 26 18:27:14.329000 audit[2013]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.329000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe22b9bb60 a2=0 a3=0 items=0 ppid=1884 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 26 18:27:14.342000 audit[2015]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.342000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff94f5e850 a2=0 a3=0 items=0 ppid=1884 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:27:14.355000 audit[2017]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2017 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.355000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe26fa3410 a2=0 a3=0 items=0 ppid=1884 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 26 18:27:14.388000 audit[2022]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.388000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9247e4c0 a2=0 a3=0 items=0 ppid=1884 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 26 18:27:14.400000 audit[2024]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.400000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe8b3d4650 a2=0 a3=0 items=0 ppid=1884 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 26 18:27:14.413000 audit[2026]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2026 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.413000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc2bde01f0 a2=0 a3=0 items=0 ppid=1884 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.413000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 26 18:27:14.427000 audit[2028]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.427000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff65a36670 a2=0 a3=0 items=0 ppid=1884 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.427000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 26 18:27:14.441000 audit[2030]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.441000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc02984cc0 a2=0 a3=0 items=0 ppid=1884 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 26 18:27:14.456000 audit[2032]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2032 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:14.456000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffec0475ac0 a2=0 a3=0 items=0 ppid=1884 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.456000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 26 18:27:14.512000 audit[2037]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.512000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff07132f10 a2=0 a3=0 items=0 ppid=1884 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 26 18:27:14.522000 audit[2039]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.522000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffeb0583640 a2=0 a3=0 items=0 ppid=1884 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.522000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 26 18:27:14.571000 audit[2047]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.571000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc32889740 a2=0 a3=0 items=0 ppid=1884 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 26 18:27:14.616000 audit[2053]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.616000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc4c45d580 a2=0 a3=0 items=0 ppid=1884 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 26 18:27:14.627000 audit[2055]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.627000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc9ebf6070 a2=0 a3=0 items=0 ppid=1884 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.627000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 26 18:27:14.638000 audit[2057]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.638000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6afb47d0 a2=0 a3=0 items=0 ppid=1884 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 26 18:27:14.649000 audit[2059]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.649000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc08b7ba30 a2=0 a3=0 items=0 ppid=1884 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:27:14.660000 audit[2061]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:14.660000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1bfd8d80 
a2=0 a3=0 items=0 ppid=1884 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:14.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 26 18:27:14.662968 systemd-networkd[1514]: docker0: Link UP Jan 26 18:27:14.676064 dockerd[1884]: time="2026-01-26T18:27:14.675689579Z" level=info msg="Loading containers: done." Jan 26 18:27:14.710177 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck415828892-merged.mount: Deactivated successfully. Jan 26 18:27:14.717448 dockerd[1884]: time="2026-01-26T18:27:14.717307117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 26 18:27:14.717547 dockerd[1884]: time="2026-01-26T18:27:14.717466805Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 26 18:27:14.717581 dockerd[1884]: time="2026-01-26T18:27:14.717558566Z" level=info msg="Initializing buildkit" Jan 26 18:27:14.823520 dockerd[1884]: time="2026-01-26T18:27:14.822584979Z" level=info msg="Completed buildkit initialization" Jan 26 18:27:14.839493 dockerd[1884]: time="2026-01-26T18:27:14.838475227Z" level=info msg="Daemon has completed initialization" Jan 26 18:27:14.840973 dockerd[1884]: time="2026-01-26T18:27:14.840653609Z" level=info msg="API listen on /run/docker.sock" Jan 26 18:27:14.841159 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 26 18:27:14.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:16.151279 containerd[1602]: time="2026-01-26T18:27:16.150689123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 26 18:27:17.000438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281894134.mount: Deactivated successfully. Jan 26 18:27:20.095216 containerd[1602]: time="2026-01-26T18:27:20.094965416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:20.098194 containerd[1602]: time="2026-01-26T18:27:20.098045978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28550692" Jan 26 18:27:20.101593 containerd[1602]: time="2026-01-26T18:27:20.101525865Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:20.111034 containerd[1602]: time="2026-01-26T18:27:20.110710255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:20.112269 containerd[1602]: time="2026-01-26T18:27:20.111972729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.960991891s" Jan 26 18:27:20.112269 containerd[1602]: time="2026-01-26T18:27:20.112016511Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 26 18:27:20.114737 containerd[1602]: time="2026-01-26T18:27:20.114543599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 26 18:27:22.412532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 26 18:27:22.417250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:22.696048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:22.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:22.715266 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 26 18:27:22.715332 kernel: audit: type=1130 audit(1769452042.695:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:22.742536 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:27:22.898898 kubelet[2173]: E0126 18:27:22.898699 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:27:22.904611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:27:22.905479 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 26 18:27:22.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:22.907184 systemd[1]: kubelet.service: Consumed 412ms CPU time, 111.3M memory peak. Jan 26 18:27:22.936175 kernel: audit: type=1131 audit(1769452042.905:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:23.408040 containerd[1602]: time="2026-01-26T18:27:23.407447196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:23.410497 containerd[1602]: time="2026-01-26T18:27:23.410471850Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 26 18:27:23.414704 containerd[1602]: time="2026-01-26T18:27:23.414657968Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:23.420048 containerd[1602]: time="2026-01-26T18:27:23.420018345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:23.421933 containerd[1602]: time="2026-01-26T18:27:23.421220021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.306647278s" Jan 26 18:27:23.421933 containerd[1602]: time="2026-01-26T18:27:23.421341738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 26 18:27:23.423280 containerd[1602]: time="2026-01-26T18:27:23.422984540Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 26 18:27:25.787385 containerd[1602]: time="2026-01-26T18:27:25.787205465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:25.791047 containerd[1602]: time="2026-01-26T18:27:25.790611202Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 26 18:27:25.794580 containerd[1602]: time="2026-01-26T18:27:25.794345409Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:25.801027 containerd[1602]: time="2026-01-26T18:27:25.800994219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:25.803258 containerd[1602]: time="2026-01-26T18:27:25.802648817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.379305637s" Jan 26 18:27:25.803258 
containerd[1602]: time="2026-01-26T18:27:25.803010522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 26 18:27:25.804516 containerd[1602]: time="2026-01-26T18:27:25.804326538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 26 18:27:27.456283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702179329.mount: Deactivated successfully. Jan 26 18:27:29.124018 containerd[1602]: time="2026-01-26T18:27:29.123540136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:29.126193 containerd[1602]: time="2026-01-26T18:27:29.125996804Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 26 18:27:29.130672 containerd[1602]: time="2026-01-26T18:27:29.130604897Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:29.137091 containerd[1602]: time="2026-01-26T18:27:29.136974587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:29.138477 containerd[1602]: time="2026-01-26T18:27:29.138254880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 3.333887686s" Jan 26 18:27:29.138477 containerd[1602]: time="2026-01-26T18:27:29.138396344Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 26 18:27:29.140372 containerd[1602]: time="2026-01-26T18:27:29.140199680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 26 18:27:29.820478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803059582.mount: Deactivated successfully. Jan 26 18:27:31.974904 containerd[1602]: time="2026-01-26T18:27:31.974681315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:31.976175 containerd[1602]: time="2026-01-26T18:27:31.976151623Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 26 18:27:31.978731 containerd[1602]: time="2026-01-26T18:27:31.978673581Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:31.984424 containerd[1602]: time="2026-01-26T18:27:31.984380115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:31.985693 containerd[1602]: time="2026-01-26T18:27:31.985108155Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.844784473s" Jan 26 18:27:31.985693 containerd[1602]: time="2026-01-26T18:27:31.985214343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 26 18:27:31.987100 containerd[1602]: time="2026-01-26T18:27:31.986680098Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 26 18:27:32.473008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821388011.mount: Deactivated successfully. Jan 26 18:27:32.488644 containerd[1602]: time="2026-01-26T18:27:32.488282611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:27:32.490587 containerd[1602]: time="2026-01-26T18:27:32.490548732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 26 18:27:32.493352 containerd[1602]: time="2026-01-26T18:27:32.493152542Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:27:32.498691 containerd[1602]: time="2026-01-26T18:27:32.498283249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:27:32.499506 containerd[1602]: time="2026-01-26T18:27:32.499197740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.483638ms" Jan 26 18:27:32.499506 containerd[1602]: time="2026-01-26T18:27:32.499323120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 26 18:27:32.501350 containerd[1602]: time="2026-01-26T18:27:32.501146943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 26 18:27:32.911971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 26 18:27:32.914636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:33.290685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167081227.mount: Deactivated successfully. Jan 26 18:27:33.299406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:33.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:33.324130 kernel: audit: type=1130 audit(1769452053.298:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:33.339547 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:27:33.458587 kubelet[2263]: E0126 18:27:33.458551 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:27:33.463415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:27:33.463681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:27:33.464573 systemd[1]: kubelet.service: Consumed 380ms CPU time, 109.3M memory peak. 
Jan 26 18:27:33.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:33.503035 kernel: audit: type=1131 audit(1769452053.463:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:37.376026 containerd[1602]: time="2026-01-26T18:27:37.375144540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:37.377566 containerd[1602]: time="2026-01-26T18:27:37.377423443Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58264193" Jan 26 18:27:37.382193 containerd[1602]: time="2026-01-26T18:27:37.382044335Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:37.387712 containerd[1602]: time="2026-01-26T18:27:37.387402050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:27:37.388423 containerd[1602]: time="2026-01-26T18:27:37.388212061Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.886959327s" Jan 26 18:27:37.388423 containerd[1602]: time="2026-01-26T18:27:37.388428532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference 
\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 26 18:27:42.214675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:42.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:42.215414 systemd[1]: kubelet.service: Consumed 380ms CPU time, 109.3M memory peak. Jan 26 18:27:42.220723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:42.241951 kernel: audit: type=1130 audit(1769452062.214:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:42.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:42.269009 kernel: audit: type=1131 audit(1769452062.214:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:42.295995 systemd[1]: Reload requested from client PID 2353 ('systemctl') (unit session-10.scope)... Jan 26 18:27:42.296105 systemd[1]: Reloading... Jan 26 18:27:42.453031 zram_generator::config[2400]: No configuration found. Jan 26 18:27:42.835489 systemd[1]: Reloading finished in 538 ms. 
Jan 26 18:27:42.872000 audit: BPF prog-id=63 op=LOAD Jan 26 18:27:42.883050 kernel: audit: type=1334 audit(1769452062.872:287): prog-id=63 op=LOAD Jan 26 18:27:42.872000 audit: BPF prog-id=64 op=LOAD Jan 26 18:27:42.874000 audit: BPF prog-id=47 op=UNLOAD Jan 26 18:27:42.874000 audit: BPF prog-id=48 op=UNLOAD Jan 26 18:27:42.876000 audit: BPF prog-id=65 op=LOAD Jan 26 18:27:42.876000 audit: BPF prog-id=46 op=UNLOAD Jan 26 18:27:42.878000 audit: BPF prog-id=66 op=LOAD Jan 26 18:27:42.878000 audit: BPF prog-id=58 op=UNLOAD Jan 26 18:27:42.880000 audit: BPF prog-id=67 op=LOAD Jan 26 18:27:42.880000 audit: BPF prog-id=49 op=UNLOAD Jan 26 18:27:42.880000 audit: BPF prog-id=68 op=LOAD Jan 26 18:27:42.895030 kernel: audit: type=1334 audit(1769452062.872:288): prog-id=64 op=LOAD Jan 26 18:27:42.895061 kernel: audit: type=1334 audit(1769452062.874:289): prog-id=47 op=UNLOAD Jan 26 18:27:42.895080 kernel: audit: type=1334 audit(1769452062.874:290): prog-id=48 op=UNLOAD Jan 26 18:27:42.895101 kernel: audit: type=1334 audit(1769452062.876:291): prog-id=65 op=LOAD Jan 26 18:27:42.895126 kernel: audit: type=1334 audit(1769452062.876:292): prog-id=46 op=UNLOAD Jan 26 18:27:42.895142 kernel: audit: type=1334 audit(1769452062.878:293): prog-id=66 op=LOAD Jan 26 18:27:42.895164 kernel: audit: type=1334 audit(1769452062.878:294): prog-id=58 op=UNLOAD Jan 26 18:27:42.880000 audit: BPF prog-id=69 op=LOAD Jan 26 18:27:42.880000 audit: BPF prog-id=50 op=UNLOAD Jan 26 18:27:42.880000 audit: BPF prog-id=51 op=UNLOAD Jan 26 18:27:42.883000 audit: BPF prog-id=70 op=LOAD Jan 26 18:27:42.883000 audit: BPF prog-id=55 op=UNLOAD Jan 26 18:27:42.883000 audit: BPF prog-id=71 op=LOAD Jan 26 18:27:42.883000 audit: BPF prog-id=72 op=LOAD Jan 26 18:27:42.883000 audit: BPF prog-id=56 op=UNLOAD Jan 26 18:27:42.883000 audit: BPF prog-id=57 op=UNLOAD Jan 26 18:27:42.884000 audit: BPF prog-id=73 op=LOAD Jan 26 18:27:42.884000 audit: BPF prog-id=43 op=UNLOAD Jan 26 18:27:42.884000 audit: BPF prog-id=74 
op=LOAD Jan 26 18:27:42.884000 audit: BPF prog-id=75 op=LOAD Jan 26 18:27:42.884000 audit: BPF prog-id=44 op=UNLOAD Jan 26 18:27:42.885000 audit: BPF prog-id=45 op=UNLOAD Jan 26 18:27:42.886000 audit: BPF prog-id=76 op=LOAD Jan 26 18:27:42.886000 audit: BPF prog-id=59 op=UNLOAD Jan 26 18:27:42.892000 audit: BPF prog-id=77 op=LOAD Jan 26 18:27:42.892000 audit: BPF prog-id=60 op=UNLOAD Jan 26 18:27:42.893000 audit: BPF prog-id=78 op=LOAD Jan 26 18:27:42.893000 audit: BPF prog-id=79 op=LOAD Jan 26 18:27:42.893000 audit: BPF prog-id=61 op=UNLOAD Jan 26 18:27:42.893000 audit: BPF prog-id=62 op=UNLOAD Jan 26 18:27:42.895000 audit: BPF prog-id=80 op=LOAD Jan 26 18:27:42.895000 audit: BPF prog-id=52 op=UNLOAD Jan 26 18:27:42.896000 audit: BPF prog-id=81 op=LOAD Jan 26 18:27:42.896000 audit: BPF prog-id=82 op=LOAD Jan 26 18:27:42.896000 audit: BPF prog-id=53 op=UNLOAD Jan 26 18:27:42.896000 audit: BPF prog-id=54 op=UNLOAD Jan 26 18:27:42.953160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 26 18:27:42.953349 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 26 18:27:42.954517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:42.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:27:42.955064 systemd[1]: kubelet.service: Consumed 282ms CPU time, 98.5M memory peak. Jan 26 18:27:42.959257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:43.277492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:43.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:27:43.302487 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 26 18:27:43.587558 kubelet[2446]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:27:43.587558 kubelet[2446]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 26 18:27:43.587558 kubelet[2446]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:27:43.592232 kubelet[2446]: I0126 18:27:43.590039 2446 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:27:44.063885 update_engine[1582]: I20260126 18:27:44.063613 1582 update_attempter.cc:509] Updating boot flags... 
Jan 26 18:27:44.257010 kubelet[2446]: I0126 18:27:44.256402 2446 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 26 18:27:44.257010 kubelet[2446]: I0126 18:27:44.256427 2446 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 18:27:44.257010 kubelet[2446]: I0126 18:27:44.256607 2446 server.go:956] "Client rotation is on, will bootstrap in background" Jan 26 18:27:44.303137 kubelet[2446]: I0126 18:27:44.303108 2446 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 26 18:27:44.307496 kubelet[2446]: E0126 18:27:44.307303 2446 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 18:27:44.354074 kubelet[2446]: I0126 18:27:44.351059 2446 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 18:27:44.390089 kubelet[2446]: I0126 18:27:44.389392 2446 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 26 18:27:44.390637 kubelet[2446]: I0126 18:27:44.390600 2446 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 18:27:44.391314 kubelet[2446]: I0126 18:27:44.390970 2446 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 18:27:44.391975 kubelet[2446]: I0126 18:27:44.391691 2446 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 18:27:44.391975 
kubelet[2446]: I0126 18:27:44.391716 2446 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 18:27:44.393591 kubelet[2446]: I0126 18:27:44.393570 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:27:44.398374 kubelet[2446]: I0126 18:27:44.398350 2446 kubelet.go:480] "Attempting to sync node with API server" Jan 26 18:27:44.398590 kubelet[2446]: I0126 18:27:44.398449 2446 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 18:27:44.398590 kubelet[2446]: I0126 18:27:44.398489 2446 kubelet.go:386] "Adding apiserver pod source" Jan 26 18:27:44.398590 kubelet[2446]: I0126 18:27:44.398512 2446 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 18:27:44.414672 kubelet[2446]: E0126 18:27:44.414630 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 18:27:44.423013 kubelet[2446]: E0126 18:27:44.422283 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 18:27:44.429623 kubelet[2446]: I0126 18:27:44.429376 2446 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 26 18:27:44.431956 kubelet[2446]: I0126 18:27:44.431715 2446 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 26 18:27:44.434461 kubelet[2446]: W0126 
18:27:44.434335 2446 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 26 18:27:44.450679 kubelet[2446]: I0126 18:27:44.450658 2446 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 18:27:44.452327 kubelet[2446]: I0126 18:27:44.452127 2446 server.go:1289] "Started kubelet" Jan 26 18:27:44.453533 kubelet[2446]: I0126 18:27:44.452929 2446 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 18:27:44.454023 kubelet[2446]: I0126 18:27:44.453995 2446 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 18:27:44.454460 kubelet[2446]: I0126 18:27:44.454441 2446 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 18:27:44.456337 kubelet[2446]: I0126 18:27:44.456320 2446 server.go:317] "Adding debug handlers to kubelet server" Jan 26 18:27:44.459660 kubelet[2446]: I0126 18:27:44.459023 2446 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 18:27:44.463105 kubelet[2446]: E0126 18:27:44.461701 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188e5b4714aeaff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-26 18:27:44.450981881 +0000 UTC m=+1.127397537,LastTimestamp:2026-01-26 18:27:44.450981881 +0000 UTC m=+1.127397537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 26 18:27:44.466862 
kubelet[2446]: I0126 18:27:44.465570 2446 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 18:27:44.466862 kubelet[2446]: E0126 18:27:44.466072 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 26 18:27:44.466862 kubelet[2446]: I0126 18:27:44.461736 2446 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 26 18:27:44.469833 kubelet[2446]: I0126 18:27:44.469398 2446 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 18:27:44.469833 kubelet[2446]: I0126 18:27:44.469553 2446 reconciler.go:26] "Reconciler: start to sync state" Jan 26 18:27:44.470647 kubelet[2446]: E0126 18:27:44.470318 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Jan 26 18:27:44.471357 kubelet[2446]: I0126 18:27:44.470988 2446 factory.go:223] Registration of the systemd container factory successfully Jan 26 18:27:44.472404 kubelet[2446]: I0126 18:27:44.472126 2446 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 26 18:27:44.474623 kubelet[2446]: E0126 18:27:44.474078 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 18:27:44.474623 kubelet[2446]: E0126 18:27:44.474398 2446 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 26 18:27:44.481699 kubelet[2446]: I0126 18:27:44.481575 2446 factory.go:223] Registration of the containerd container factory successfully Jan 26 18:27:44.499000 audit[2482]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.499000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcda80bb40 a2=0 a3=0 items=0 ppid=2446 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 26 18:27:44.507000 audit[2483]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.507000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe82d3c060 a2=0 a3=0 items=0 ppid=2446 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 26 18:27:44.519000 audit[2486]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.519000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffae571c80 a2=0 a3=0 items=0 ppid=2446 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:27:44.531725 kubelet[2446]: I0126 18:27:44.531480 2446 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 26 18:27:44.532500 kubelet[2446]: I0126 18:27:44.532304 2446 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 26 18:27:44.532500 kubelet[2446]: I0126 18:27:44.532433 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:27:44.532000 audit[2489]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.532000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc13550510 a2=0 a3=0 items=0 ppid=2446 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:27:44.538388 kubelet[2446]: I0126 18:27:44.538271 2446 policy_none.go:49] "None policy: Start" Jan 26 18:27:44.538481 kubelet[2446]: I0126 18:27:44.538390 2446 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 18:27:44.538481 kubelet[2446]: I0126 18:27:44.538407 2446 state_mem.go:35] "Initializing new in-memory state store" Jan 26 18:27:44.563282 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 26 18:27:44.567315 kubelet[2446]: E0126 18:27:44.567051 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 26 18:27:44.566000 audit[2492]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.566000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffaab0a520 a2=0 a3=0 items=0 ppid=2446 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 26 18:27:44.568721 kubelet[2446]: I0126 18:27:44.568264 2446 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 26 18:27:44.574000 audit[2494]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.574000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd66673b30 a2=0 a3=0 items=0 ppid=2446 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 26 18:27:44.574000 audit[2493]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:44.574000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc2fad0d0 a2=0 a3=0 items=0 ppid=2446 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 26 18:27:44.576932 kubelet[2446]: I0126 18:27:44.575960 2446 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 26 18:27:44.576932 kubelet[2446]: I0126 18:27:44.575985 2446 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 26 18:27:44.576932 kubelet[2446]: I0126 18:27:44.576005 2446 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 26 18:27:44.576932 kubelet[2446]: I0126 18:27:44.576013 2446 kubelet.go:2436] "Starting kubelet main sync loop" Jan 26 18:27:44.576932 kubelet[2446]: E0126 18:27:44.576052 2446 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 18:27:44.576932 kubelet[2446]: E0126 18:27:44.576717 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 18:27:44.580000 audit[2495]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.580000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2cb3fe50 a2=0 a3=0 items=0 ppid=2446 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 26 18:27:44.581000 audit[2496]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:44.581000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdec9813f0 a2=0 a3=0 items=0 ppid=2446 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.581000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 26 18:27:44.587000 audit[2497]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:44.587000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4dd67fc0 a2=0 a3=0 items=0 ppid=2446 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 26 18:27:44.589137 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 26 18:27:44.589000 audit[2498]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:44.589000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd302989f0 a2=0 a3=0 items=0 ppid=2446 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 26 18:27:44.599123 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 26 18:27:44.599000 audit[2499]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:44.599000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc100c5f00 a2=0 a3=0 items=0 ppid=2446 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:44.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 26 18:27:44.616034 kubelet[2446]: E0126 18:27:44.612555 2446 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 26 18:27:44.616034 kubelet[2446]: I0126 18:27:44.613006 2446 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 18:27:44.616034 kubelet[2446]: I0126 18:27:44.613016 2446 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 18:27:44.616034 kubelet[2446]: I0126 18:27:44.614114 2446 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 18:27:44.618383 kubelet[2446]: E0126 18:27:44.618318 2446 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 26 18:27:44.618383 kubelet[2446]: E0126 18:27:44.618362 2446 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 26 18:27:44.672119 kubelet[2446]: E0126 18:27:44.672066 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Jan 26 18:27:44.707735 systemd[1]: Created slice kubepods-burstable-pod50bbe4c81f3b3c758ec96669c31e251c.slice - libcontainer container kubepods-burstable-pod50bbe4c81f3b3c758ec96669c31e251c.slice. Jan 26 18:27:44.716982 kubelet[2446]: I0126 18:27:44.716724 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:27:44.717658 kubelet[2446]: E0126 18:27:44.717633 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 26 18:27:44.730004 kubelet[2446]: E0126 18:27:44.729688 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:44.739992 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 26 18:27:44.744349 kubelet[2446]: E0126 18:27:44.744066 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:44.758550 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 26 18:27:44.763677 kubelet[2446]: E0126 18:27:44.763551 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:44.770651 kubelet[2446]: I0126 18:27:44.770624 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:44.771385 kubelet[2446]: I0126 18:27:44.771062 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:44.771385 kubelet[2446]: I0126 18:27:44.771089 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:44.771385 kubelet[2446]: I0126 18:27:44.771111 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 26 18:27:44.771385 kubelet[2446]: I0126 18:27:44.771248 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:44.771385 kubelet[2446]: I0126 18:27:44.771274 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:44.771565 kubelet[2446]: I0126 18:27:44.771297 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:44.771565 kubelet[2446]: I0126 18:27:44.771319 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:44.771565 kubelet[2446]: I0126 18:27:44.771339 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:44.923725 kubelet[2446]: I0126 18:27:44.923382 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:27:44.923725 kubelet[2446]: E0126 
18:27:44.923689 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 26 18:27:45.032080 kubelet[2446]: E0126 18:27:45.031703 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.035492 containerd[1602]: time="2026-01-26T18:27:45.034544539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50bbe4c81f3b3c758ec96669c31e251c,Namespace:kube-system,Attempt:0,}" Jan 26 18:27:45.046516 kubelet[2446]: E0126 18:27:45.046028 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.047569 containerd[1602]: time="2026-01-26T18:27:45.046968631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 26 18:27:45.065252 kubelet[2446]: E0126 18:27:45.064931 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.065572 containerd[1602]: time="2026-01-26T18:27:45.065443099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 26 18:27:45.074928 kubelet[2446]: E0126 18:27:45.074549 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Jan 26 18:27:45.089703 kubelet[2446]: 
E0126 18:27:45.089466 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188e5b4714aeaff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-26 18:27:44.450981881 +0000 UTC m=+1.127397537,LastTimestamp:2026-01-26 18:27:44.450981881 +0000 UTC m=+1.127397537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 26 18:27:45.141502 containerd[1602]: time="2026-01-26T18:27:45.141359997Z" level=info msg="connecting to shim 7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae" address="unix:///run/containerd/s/378914bc3d56d7799000237f259c65aa3fd6d6c0a3c8f9262c3d482a5842575a" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:27:45.189622 containerd[1602]: time="2026-01-26T18:27:45.188218404Z" level=info msg="connecting to shim 52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b" address="unix:///run/containerd/s/7895f6fba926b3cf2cc1d634e5c9002e60e238085548fca547eb38dc569b73cf" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:27:45.199386 containerd[1602]: time="2026-01-26T18:27:45.199354849Z" level=info msg="connecting to shim e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0" address="unix:///run/containerd/s/4b489b5bd8ae0c08cac017c52fca6c7221208ce45e3dbd653c757a16a59f1eff" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:27:45.262259 systemd[1]: Started cri-containerd-7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae.scope - libcontainer container 
7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae. Jan 26 18:27:45.303055 systemd[1]: Started cri-containerd-52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b.scope - libcontainer container 52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b. Jan 26 18:27:45.333024 kubelet[2446]: I0126 18:27:45.332994 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:27:45.335529 kubelet[2446]: E0126 18:27:45.335478 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 26 18:27:45.343374 systemd[1]: Started cri-containerd-e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0.scope - libcontainer container e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0. Jan 26 18:27:45.347000 audit: BPF prog-id=83 op=LOAD Jan 26 18:27:45.361000 audit: BPF prog-id=84 op=LOAD Jan 26 18:27:45.361000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.362000 audit: BPF prog-id=84 op=UNLOAD Jan 26 18:27:45.362000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 26 18:27:45.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.364652 kubelet[2446]: E0126 18:27:45.364629 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 18:27:45.366000 audit: BPF prog-id=85 op=LOAD Jan 26 18:27:45.366000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.369000 audit: BPF prog-id=86 op=LOAD Jan 26 18:27:45.369000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.369000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.369000 audit: BPF prog-id=86 op=UNLOAD Jan 26 18:27:45.369000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.369000 audit: BPF prog-id=85 op=UNLOAD Jan 26 18:27:45.369000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.369000 audit: BPF prog-id=87 op=LOAD Jan 26 18:27:45.369000 audit: BPF prog-id=88 op=LOAD Jan 26 18:27:45.369000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2508 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764336365636438366636326461616633313433663335653863663938 Jan 26 18:27:45.372000 audit: BPF prog-id=89 op=LOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.372000 audit: BPF prog-id=89 op=UNLOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.372000 audit: BPF prog-id=90 op=LOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.372000 audit: BPF prog-id=91 op=LOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.372000 audit: BPF prog-id=91 op=UNLOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.372000 audit: BPF prog-id=90 op=UNLOAD Jan 26 18:27:45.372000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.373000 audit: BPF prog-id=92 op=LOAD Jan 26 18:27:45.373000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2528 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636663353531363061373266643230633732633035353166363763 Jan 26 18:27:45.385948 kubelet[2446]: E0126 18:27:45.385504 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 18:27:45.416000 audit: BPF prog-id=93 op=LOAD Jan 26 18:27:45.418000 audit: BPF prog-id=94 op=LOAD Jan 26 18:27:45.418000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.418000 audit: BPF prog-id=94 op=UNLOAD Jan 26 18:27:45.418000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.420000 audit: BPF prog-id=95 op=LOAD Jan 26 18:27:45.420000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.420000 audit: BPF prog-id=96 op=LOAD Jan 26 18:27:45.420000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 26 18:27:45.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.420000 audit: BPF prog-id=96 op=UNLOAD Jan 26 18:27:45.420000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.420000 audit: BPF prog-id=95 op=UNLOAD Jan 26 18:27:45.420000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.420000 audit: BPF prog-id=97 op=LOAD Jan 26 18:27:45.420000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2536 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323165336366633263633566646139313231633131313666383465 Jan 26 18:27:45.491550 containerd[1602]: time="2026-01-26T18:27:45.490428890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50bbe4c81f3b3c758ec96669c31e251c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae\"" Jan 26 18:27:45.504209 kubelet[2446]: E0126 18:27:45.503581 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.513693 containerd[1602]: time="2026-01-26T18:27:45.513279812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b\"" Jan 26 18:27:45.516334 kubelet[2446]: E0126 18:27:45.516316 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.521520 containerd[1602]: time="2026-01-26T18:27:45.521270094Z" level=info msg="CreateContainer within sandbox \"7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 26 18:27:45.530274 containerd[1602]: time="2026-01-26T18:27:45.529624765Z" level=info msg="CreateContainer within sandbox \"52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 26 18:27:45.542730 containerd[1602]: time="2026-01-26T18:27:45.541539955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0\"" Jan 26 18:27:45.548314 kubelet[2446]: E0126 18:27:45.548017 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:45.560319 containerd[1602]: time="2026-01-26T18:27:45.559738110Z" level=info msg="CreateContainer within sandbox \"e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 26 18:27:45.562592 containerd[1602]: time="2026-01-26T18:27:45.562369656Z" level=info msg="Container a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:27:45.572272 containerd[1602]: time="2026-01-26T18:27:45.571503456Z" level=info msg="Container 5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:27:45.591943 containerd[1602]: time="2026-01-26T18:27:45.591706672Z" level=info msg="CreateContainer within sandbox \"52cfc55160a72fd20c72c0551f67ca4d769a7858a926afa5f985e5350f11a77b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67\"" Jan 26 18:27:45.594909 containerd[1602]: time="2026-01-26T18:27:45.594659121Z" level=info msg="StartContainer for \"a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67\"" Jan 26 18:27:45.598572 containerd[1602]: time="2026-01-26T18:27:45.598421436Z" level=info msg="connecting to shim 
a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67" address="unix:///run/containerd/s/7895f6fba926b3cf2cc1d634e5c9002e60e238085548fca547eb38dc569b73cf" protocol=ttrpc version=3 Jan 26 18:27:45.611200 containerd[1602]: time="2026-01-26T18:27:45.610515638Z" level=info msg="CreateContainer within sandbox \"7d3cecd86f62daaf3143f35e8cf98ffc85ec2ef6ccc24ea9a80e112dcaec84ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779\"" Jan 26 18:27:45.615620 containerd[1602]: time="2026-01-26T18:27:45.615220626Z" level=info msg="StartContainer for \"5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779\"" Jan 26 18:27:45.617379 containerd[1602]: time="2026-01-26T18:27:45.616679540Z" level=info msg="connecting to shim 5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779" address="unix:///run/containerd/s/378914bc3d56d7799000237f259c65aa3fd6d6c0a3c8f9262c3d482a5842575a" protocol=ttrpc version=3 Jan 26 18:27:45.634658 containerd[1602]: time="2026-01-26T18:27:45.634156292Z" level=info msg="Container fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:27:45.674360 containerd[1602]: time="2026-01-26T18:27:45.673465921Z" level=info msg="CreateContainer within sandbox \"e521e3cfc2cc5fda9121c1116f84edcd3aa1866158538932a3a548c9bda42bb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc\"" Jan 26 18:27:45.678368 containerd[1602]: time="2026-01-26T18:27:45.678334967Z" level=info msg="StartContainer for \"fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc\"" Jan 26 18:27:45.685367 containerd[1602]: time="2026-01-26T18:27:45.684530090Z" level=info msg="connecting to shim fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc" 
address="unix:///run/containerd/s/4b489b5bd8ae0c08cac017c52fca6c7221208ce45e3dbd653c757a16a59f1eff" protocol=ttrpc version=3 Jan 26 18:27:45.696540 systemd[1]: Started cri-containerd-5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779.scope - libcontainer container 5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779. Jan 26 18:27:45.722391 systemd[1]: Started cri-containerd-a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67.scope - libcontainer container a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67. Jan 26 18:27:45.759000 audit: BPF prog-id=98 op=LOAD Jan 26 18:27:45.761000 audit: BPF prog-id=99 op=LOAD Jan 26 18:27:45.761000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.762000 audit: BPF prog-id=99 op=UNLOAD Jan 26 18:27:45.762000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.762000 
audit: BPF prog-id=100 op=LOAD Jan 26 18:27:45.762000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.762000 audit: BPF prog-id=101 op=LOAD Jan 26 18:27:45.762000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.764000 audit: BPF prog-id=101 op=UNLOAD Jan 26 18:27:45.764000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.764000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.764000 audit: BPF prog-id=100 op=UNLOAD Jan 26 18:27:45.764000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.764000 audit: BPF prog-id=102 op=LOAD Jan 26 18:27:45.764000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2508 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561623736373436396134316564643033336236666233316338313066 Jan 26 18:27:45.779399 systemd[1]: Started cri-containerd-fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc.scope - libcontainer container fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc. 
Jan 26 18:27:45.787000 audit: BPF prog-id=103 op=LOAD Jan 26 18:27:45.790000 audit: BPF prog-id=104 op=LOAD Jan 26 18:27:45.790000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.793000 audit: BPF prog-id=104 op=UNLOAD Jan 26 18:27:45.793000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.795000 audit: BPF prog-id=105 op=LOAD Jan 26 18:27:45.795000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.795000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.797000 audit: BPF prog-id=106 op=LOAD Jan 26 18:27:45.797000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.797000 audit: BPF prog-id=106 op=UNLOAD Jan 26 18:27:45.797000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.797000 audit: BPF prog-id=105 op=UNLOAD Jan 26 18:27:45.797000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:27:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.797000 audit: BPF prog-id=107 op=LOAD Jan 26 18:27:45.797000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2528 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133343864656462366166313763306563353330353165646438326539 Jan 26 18:27:45.825000 audit: BPF prog-id=108 op=LOAD Jan 26 18:27:45.827000 audit: BPF prog-id=109 op=LOAD Jan 26 18:27:45.827000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.827000 audit: BPF prog-id=109 op=UNLOAD Jan 26 18:27:45.827000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.828000 audit: BPF prog-id=110 op=LOAD Jan 26 18:27:45.828000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.828000 audit: BPF prog-id=111 op=LOAD Jan 26 18:27:45.828000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.828000 audit: BPF prog-id=111 op=UNLOAD Jan 26 18:27:45.828000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.828000 audit: BPF prog-id=110 op=UNLOAD Jan 26 18:27:45.828000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.829000 audit: BPF prog-id=112 op=LOAD Jan 26 18:27:45.829000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2536 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:45.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664626163656465306331616238363464373337306336316464373731 Jan 26 18:27:45.843204 kubelet[2446]: E0126 18:27:45.841520 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 18:27:45.880432 kubelet[2446]: E0126 18:27:45.879489 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Jan 26 18:27:45.890540 containerd[1602]: time="2026-01-26T18:27:45.889523317Z" level=info msg="StartContainer for \"5ab767469a41edd033b6fb31c810fa2de708cfb8c0c2af2757ebed5290163779\" returns successfully" Jan 26 18:27:45.964855 containerd[1602]: time="2026-01-26T18:27:45.964709569Z" level=info msg="StartContainer for \"a348dedb6af17c0ec53051edd82e9ade1ba6a9df12d0f4e0a57efd60789d3f67\" returns successfully" Jan 26 18:27:46.002956 containerd[1602]: time="2026-01-26T18:27:46.000172403Z" level=info msg="StartContainer for \"fdbacede0c1ab864d7370c61dd7712330179d61810cb9246516c0e2a589362bc\" returns successfully" Jan 26 18:27:46.140617 kubelet[2446]: I0126 18:27:46.140302 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:27:46.636344 kubelet[2446]: E0126 18:27:46.635663 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:46.636344 kubelet[2446]: E0126 18:27:46.636322 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:46.653175 kubelet[2446]: E0126 18:27:46.651238 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jan 26 18:27:46.653175 kubelet[2446]: E0126 18:27:46.651337 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:46.654507 kubelet[2446]: E0126 18:27:46.642403 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:46.654507 kubelet[2446]: E0126 18:27:46.654228 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:47.654626 kubelet[2446]: E0126 18:27:47.654336 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:47.654626 kubelet[2446]: E0126 18:27:47.654535 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:47.660216 kubelet[2446]: E0126 18:27:47.659252 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:47.660216 kubelet[2446]: E0126 18:27:47.659348 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:47.661060 kubelet[2446]: E0126 18:27:47.660609 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:47.661670 kubelet[2446]: E0126 18:27:47.661656 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:48.661271 kubelet[2446]: E0126 18:27:48.661241 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:48.663245 kubelet[2446]: E0126 18:27:48.661347 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:48.663245 kubelet[2446]: E0126 18:27:48.662289 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:27:48.663245 kubelet[2446]: E0126 18:27:48.662366 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:48.828436 kubelet[2446]: E0126 18:27:48.827222 2446 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 26 18:27:49.020140 kubelet[2446]: I0126 18:27:49.019241 2446 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 26 18:27:49.070211 kubelet[2446]: I0126 18:27:49.069059 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:27:49.107430 kubelet[2446]: E0126 18:27:49.107200 2446 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 26 18:27:49.107430 kubelet[2446]: I0126 18:27:49.107234 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:49.112287 kubelet[2446]: E0126 18:27:49.112255 2446 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:49.112394 kubelet[2446]: I0126 18:27:49.112382 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:49.118107 kubelet[2446]: E0126 18:27:49.118067 2446 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:49.415054 kubelet[2446]: I0126 18:27:49.414302 2446 apiserver.go:52] "Watching apiserver" Jan 26 18:27:49.470866 kubelet[2446]: I0126 18:27:49.470493 2446 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 18:27:50.030044 kubelet[2446]: I0126 18:27:50.029483 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:50.040443 kubelet[2446]: E0126 18:27:50.040421 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:50.469382 kubelet[2446]: I0126 18:27:50.468734 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:50.480411 kubelet[2446]: E0126 18:27:50.480323 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:50.669085 kubelet[2446]: E0126 18:27:50.669056 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:50.673383 kubelet[2446]: E0126 
18:27:50.673301 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:51.820684 systemd[1]: Reload requested from client PID 2747 ('systemctl') (unit session-10.scope)... Jan 26 18:27:51.821255 systemd[1]: Reloading... Jan 26 18:27:51.996053 zram_generator::config[2789]: No configuration found. Jan 26 18:27:52.375204 systemd[1]: Reloading finished in 553 ms. Jan 26 18:27:52.451267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:52.480718 systemd[1]: kubelet.service: Deactivated successfully. Jan 26 18:27:52.482327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:52.483074 systemd[1]: kubelet.service: Consumed 2.485s CPU time, 131M memory peak. Jan 26 18:27:52.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:52.486321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:27:52.512218 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 26 18:27:52.512287 kernel: audit: type=1131 audit(1769452072.481:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:27:52.482000 audit: BPF prog-id=113 op=LOAD Jan 26 18:27:52.482000 audit: BPF prog-id=76 op=UNLOAD Jan 26 18:27:52.547131 kernel: audit: type=1334 audit(1769452072.482:390): prog-id=113 op=LOAD Jan 26 18:27:52.547185 kernel: audit: type=1334 audit(1769452072.482:391): prog-id=76 op=UNLOAD Jan 26 18:27:52.547211 kernel: audit: type=1334 audit(1769452072.489:392): prog-id=114 op=LOAD Jan 26 18:27:52.489000 audit: BPF prog-id=114 op=LOAD Jan 26 18:27:52.559552 kernel: audit: type=1334 audit(1769452072.489:393): prog-id=67 op=UNLOAD Jan 26 18:27:52.489000 audit: BPF prog-id=67 op=UNLOAD Jan 26 18:27:52.489000 audit: BPF prog-id=115 op=LOAD Jan 26 18:27:52.578356 kernel: audit: type=1334 audit(1769452072.489:394): prog-id=115 op=LOAD Jan 26 18:27:52.578525 kernel: audit: type=1334 audit(1769452072.489:395): prog-id=116 op=LOAD Jan 26 18:27:52.489000 audit: BPF prog-id=116 op=LOAD Jan 26 18:27:52.589082 kernel: audit: type=1334 audit(1769452072.489:396): prog-id=68 op=UNLOAD Jan 26 18:27:52.489000 audit: BPF prog-id=68 op=UNLOAD Jan 26 18:27:52.489000 audit: BPF prog-id=69 op=UNLOAD Jan 26 18:27:52.610633 kernel: audit: type=1334 audit(1769452072.489:397): prog-id=69 op=UNLOAD Jan 26 18:27:52.610705 kernel: audit: type=1334 audit(1769452072.490:398): prog-id=117 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=117 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=77 op=UNLOAD Jan 26 18:27:52.490000 audit: BPF prog-id=118 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=119 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=78 op=UNLOAD Jan 26 18:27:52.490000 audit: BPF prog-id=79 op=UNLOAD Jan 26 18:27:52.490000 audit: BPF prog-id=120 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=65 op=UNLOAD Jan 26 18:27:52.490000 audit: BPF prog-id=121 op=LOAD Jan 26 18:27:52.490000 audit: BPF prog-id=66 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF prog-id=122 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=80 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF 
prog-id=123 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=124 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=81 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF prog-id=82 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF prog-id=125 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=70 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF prog-id=126 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=127 op=LOAD Jan 26 18:27:52.496000 audit: BPF prog-id=71 op=UNLOAD Jan 26 18:27:52.496000 audit: BPF prog-id=72 op=UNLOAD Jan 26 18:27:52.499000 audit: BPF prog-id=128 op=LOAD Jan 26 18:27:52.499000 audit: BPF prog-id=73 op=UNLOAD Jan 26 18:27:52.499000 audit: BPF prog-id=129 op=LOAD Jan 26 18:27:52.499000 audit: BPF prog-id=130 op=LOAD Jan 26 18:27:52.499000 audit: BPF prog-id=74 op=UNLOAD Jan 26 18:27:52.499000 audit: BPF prog-id=75 op=UNLOAD Jan 26 18:27:52.499000 audit: BPF prog-id=131 op=LOAD Jan 26 18:27:52.499000 audit: BPF prog-id=132 op=LOAD Jan 26 18:27:52.499000 audit: BPF prog-id=63 op=UNLOAD Jan 26 18:27:52.499000 audit: BPF prog-id=64 op=UNLOAD Jan 26 18:27:52.876065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:27:52.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:27:52.895504 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 26 18:27:53.105598 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:27:53.105598 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 26 18:27:53.105598 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:27:53.108127 kubelet[2838]: I0126 18:27:53.105652 2838 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:27:53.148131 kubelet[2838]: I0126 18:27:53.147525 2838 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 26 18:27:53.148131 kubelet[2838]: I0126 18:27:53.147566 2838 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 18:27:53.149601 kubelet[2838]: I0126 18:27:53.149584 2838 server.go:956] "Client rotation is on, will bootstrap in background" Jan 26 18:27:53.154192 kubelet[2838]: I0126 18:27:53.154175 2838 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 26 18:27:53.173347 kubelet[2838]: I0126 18:27:53.172505 2838 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 26 18:27:53.192074 kubelet[2838]: I0126 18:27:53.191337 2838 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 18:27:53.216340 kubelet[2838]: I0126 18:27:53.215287 2838 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 26 18:27:53.216340 kubelet[2838]: I0126 18:27:53.216232 2838 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 18:27:53.216602 kubelet[2838]: I0126 18:27:53.216254 2838 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 18:27:53.216602 kubelet[2838]: I0126 18:27:53.216565 2838 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 18:27:53.216602 
kubelet[2838]: I0126 18:27:53.216575 2838 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 18:27:53.217297 kubelet[2838]: I0126 18:27:53.216624 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:27:53.217297 kubelet[2838]: I0126 18:27:53.217258 2838 kubelet.go:480] "Attempting to sync node with API server" Jan 26 18:27:53.217297 kubelet[2838]: I0126 18:27:53.217273 2838 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 18:27:53.224294 kubelet[2838]: I0126 18:27:53.222876 2838 kubelet.go:386] "Adding apiserver pod source" Jan 26 18:27:53.224294 kubelet[2838]: I0126 18:27:53.222895 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 18:27:53.235510 kubelet[2838]: I0126 18:27:53.233554 2838 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 26 18:27:53.235510 kubelet[2838]: I0126 18:27:53.234501 2838 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 26 18:27:53.255692 kubelet[2838]: I0126 18:27:53.255523 2838 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 18:27:53.260150 kubelet[2838]: I0126 18:27:53.260063 2838 server.go:1289] "Started kubelet" Jan 26 18:27:53.272023 kubelet[2838]: I0126 18:27:53.271317 2838 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 18:27:53.275329 kubelet[2838]: I0126 18:27:53.274623 2838 server.go:317] "Adding debug handlers to kubelet server" Jan 26 18:27:53.282216 kubelet[2838]: I0126 18:27:53.281176 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 18:27:53.282216 kubelet[2838]: I0126 18:27:53.281968 2838 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 18:27:53.284959 
kubelet[2838]: I0126 18:27:53.284129 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 18:27:53.290621 kubelet[2838]: I0126 18:27:53.289882 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 26 18:27:53.291208 kubelet[2838]: I0126 18:27:53.291080 2838 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 18:27:53.292159 kubelet[2838]: I0126 18:27:53.291593 2838 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 18:27:53.292159 kubelet[2838]: I0126 18:27:53.292048 2838 reconciler.go:26] "Reconciler: start to sync state" Jan 26 18:27:53.315118 kubelet[2838]: I0126 18:27:53.315065 2838 factory.go:223] Registration of the systemd container factory successfully Jan 26 18:27:53.315118 kubelet[2838]: I0126 18:27:53.315176 2838 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 26 18:27:53.323726 kubelet[2838]: E0126 18:27:53.322536 2838 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 26 18:27:53.342605 kubelet[2838]: I0126 18:27:53.339727 2838 factory.go:223] Registration of the containerd container factory successfully Jan 26 18:27:53.417654 kubelet[2838]: I0126 18:27:53.415526 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 26 18:27:53.436032 kubelet[2838]: I0126 18:27:53.435626 2838 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 26 18:27:53.436032 kubelet[2838]: I0126 18:27:53.435648 2838 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 26 18:27:53.436032 kubelet[2838]: I0126 18:27:53.435669 2838 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 26 18:27:53.436032 kubelet[2838]: I0126 18:27:53.435676 2838 kubelet.go:2436] "Starting kubelet main sync loop" Jan 26 18:27:53.436032 kubelet[2838]: E0126 18:27:53.435721 2838 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 18:27:53.538093 kubelet[2838]: E0126 18:27:53.537124 2838 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543220 2838 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543234 2838 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543251 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543485 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543495 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543510 2838 policy_none.go:49] "None policy: Start" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543519 2838 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543529 2838 state_mem.go:35] "Initializing new in-memory state store" Jan 26 18:27:53.543668 kubelet[2838]: I0126 18:27:53.543606 2838 state_mem.go:75] "Updated machine memory state" Jan 26 18:27:53.558047 kubelet[2838]: E0126 18:27:53.558029 
2838 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 26 18:27:53.558302 kubelet[2838]: I0126 18:27:53.558287 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 18:27:53.562696 kubelet[2838]: I0126 18:27:53.559686 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 18:27:53.562696 kubelet[2838]: I0126 18:27:53.562200 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 18:27:53.575137 kubelet[2838]: E0126 18:27:53.575112 2838 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 26 18:27:53.712259 kubelet[2838]: I0126 18:27:53.711546 2838 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:27:53.739646 kubelet[2838]: I0126 18:27:53.738022 2838 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 26 18:27:53.739646 kubelet[2838]: I0126 18:27:53.738107 2838 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 26 18:27:53.741865 kubelet[2838]: I0126 18:27:53.741301 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:53.745195 kubelet[2838]: I0126 18:27:53.744448 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.747958 kubelet[2838]: I0126 18:27:53.747704 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:27:53.758215 kubelet[2838]: E0126 18:27:53.757649 2838 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:53.762549 kubelet[2838]: E0126 18:27:53.762124 2838 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801109 kubelet[2838]: I0126 18:27:53.800090 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:53.801109 kubelet[2838]: I0126 18:27:53.800120 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801109 kubelet[2838]: I0126 18:27:53.800151 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801109 kubelet[2838]: I0126 18:27:53.800174 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801109 kubelet[2838]: I0126 18:27:53.800195 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801650 kubelet[2838]: I0126 18:27:53.800215 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:53.801650 kubelet[2838]: I0126 18:27:53.800233 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50bbe4c81f3b3c758ec96669c31e251c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50bbe4c81f3b3c758ec96669c31e251c\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:27:53.801650 kubelet[2838]: I0126 18:27:53.800267 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:27:53.801650 kubelet[2838]: I0126 18:27:53.800289 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 26 18:27:54.063200 kubelet[2838]: E0126 18:27:54.060193 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.063200 kubelet[2838]: E0126 18:27:54.063099 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.063644 kubelet[2838]: E0126 18:27:54.063224 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.229607 kubelet[2838]: I0126 18:27:54.229155 2838 apiserver.go:52] "Watching apiserver" Jan 26 18:27:54.292552 kubelet[2838]: I0126 18:27:54.292217 2838 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 18:27:54.504696 kubelet[2838]: E0126 18:27:54.503598 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.504696 kubelet[2838]: E0126 18:27:54.504654 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.505487 kubelet[2838]: E0126 18:27:54.505074 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:54.578204 kubelet[2838]: I0126 18:27:54.576223 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.57620639 podStartE2EDuration="4.57620639s" podCreationTimestamp="2026-01-26 18:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:27:54.554189977 +0000 UTC m=+1.622158307" 
watchObservedRunningTime="2026-01-26 18:27:54.57620639 +0000 UTC m=+1.644174720" Jan 26 18:27:54.606711 kubelet[2838]: I0126 18:27:54.606631 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.606616924 podStartE2EDuration="4.606616924s" podCreationTimestamp="2026-01-26 18:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:27:54.604206671 +0000 UTC m=+1.672175023" watchObservedRunningTime="2026-01-26 18:27:54.606616924 +0000 UTC m=+1.674585255" Jan 26 18:27:54.607191 kubelet[2838]: I0126 18:27:54.606984 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.606976422 podStartE2EDuration="1.606976422s" podCreationTimestamp="2026-01-26 18:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:27:54.577718814 +0000 UTC m=+1.645687135" watchObservedRunningTime="2026-01-26 18:27:54.606976422 +0000 UTC m=+1.674944793" Jan 26 18:27:55.509629 kubelet[2838]: E0126 18:27:55.509129 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:55.509629 kubelet[2838]: E0126 18:27:55.509128 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:56.062283 kubelet[2838]: I0126 18:27:56.062029 2838 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 26 18:27:56.062730 containerd[1602]: time="2026-01-26T18:27:56.062472488Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Jan 26 18:27:56.064883 kubelet[2838]: I0126 18:27:56.064533 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 26 18:27:57.134690 systemd[1]: Created slice kubepods-besteffort-pod5e804a4b_7136_4ed7_be76_ae9408a6e5b4.slice - libcontainer container kubepods-besteffort-pod5e804a4b_7136_4ed7_be76_ae9408a6e5b4.slice. Jan 26 18:27:57.138697 kubelet[2838]: I0126 18:27:57.138584 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvxxv\" (UniqueName: \"kubernetes.io/projected/5e804a4b-7136-4ed7-be76-ae9408a6e5b4-kube-api-access-rvxxv\") pod \"kube-proxy-5srsf\" (UID: \"5e804a4b-7136-4ed7-be76-ae9408a6e5b4\") " pod="kube-system/kube-proxy-5srsf" Jan 26 18:27:57.138697 kubelet[2838]: I0126 18:27:57.138613 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e804a4b-7136-4ed7-be76-ae9408a6e5b4-kube-proxy\") pod \"kube-proxy-5srsf\" (UID: \"5e804a4b-7136-4ed7-be76-ae9408a6e5b4\") " pod="kube-system/kube-proxy-5srsf" Jan 26 18:27:57.138697 kubelet[2838]: I0126 18:27:57.138630 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e804a4b-7136-4ed7-be76-ae9408a6e5b4-lib-modules\") pod \"kube-proxy-5srsf\" (UID: \"5e804a4b-7136-4ed7-be76-ae9408a6e5b4\") " pod="kube-system/kube-proxy-5srsf" Jan 26 18:27:57.138697 kubelet[2838]: I0126 18:27:57.138645 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e804a4b-7136-4ed7-be76-ae9408a6e5b4-xtables-lock\") pod \"kube-proxy-5srsf\" (UID: \"5e804a4b-7136-4ed7-be76-ae9408a6e5b4\") " pod="kube-system/kube-proxy-5srsf" Jan 26 18:27:57.277404 systemd[1]: Created slice 
kubepods-besteffort-pode0a2fb64_8d8d_4263_adb5_5484dc389cd8.slice - libcontainer container kubepods-besteffort-pode0a2fb64_8d8d_4263_adb5_5484dc389cd8.slice. Jan 26 18:27:57.343971 kubelet[2838]: I0126 18:27:57.343530 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tznjh\" (UniqueName: \"kubernetes.io/projected/e0a2fb64-8d8d-4263-adb5-5484dc389cd8-kube-api-access-tznjh\") pod \"tigera-operator-7dcd859c48-r684s\" (UID: \"e0a2fb64-8d8d-4263-adb5-5484dc389cd8\") " pod="tigera-operator/tigera-operator-7dcd859c48-r684s" Jan 26 18:27:57.343971 kubelet[2838]: I0126 18:27:57.343697 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e0a2fb64-8d8d-4263-adb5-5484dc389cd8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r684s\" (UID: \"e0a2fb64-8d8d-4263-adb5-5484dc389cd8\") " pod="tigera-operator/tigera-operator-7dcd859c48-r684s" Jan 26 18:27:57.456347 kubelet[2838]: E0126 18:27:57.455528 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:57.461043 containerd[1602]: time="2026-01-26T18:27:57.460557118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5srsf,Uid:5e804a4b-7136-4ed7-be76-ae9408a6e5b4,Namespace:kube-system,Attempt:0,}" Jan 26 18:27:57.600508 containerd[1602]: time="2026-01-26T18:27:57.600338938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r684s,Uid:e0a2fb64-8d8d-4263-adb5-5484dc389cd8,Namespace:tigera-operator,Attempt:0,}" Jan 26 18:27:57.648446 containerd[1602]: time="2026-01-26T18:27:57.648224035Z" level=info msg="connecting to shim 652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57" 
address="unix:///run/containerd/s/0851fe5c29fa167995f0336949f4cb059fb3f830fbd7ee0c7f53bc4822b04c1a" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:27:57.779987 containerd[1602]: time="2026-01-26T18:27:57.778476680Z" level=info msg="connecting to shim bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9" address="unix:///run/containerd/s/b7e5d8b417980513a762b10e02cfbabd38d1b33220e98ede29d94104467e96c2" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:27:57.907639 systemd[1]: Started cri-containerd-652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57.scope - libcontainer container 652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57. Jan 26 18:27:57.917364 systemd[1]: Started cri-containerd-bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9.scope - libcontainer container bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9. Jan 26 18:27:57.986066 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 26 18:27:57.986645 kernel: audit: type=1334 audit(1769452077.972:431): prog-id=133 op=LOAD Jan 26 18:27:57.972000 audit: BPF prog-id=133 op=LOAD Jan 26 18:27:57.995464 kernel: audit: type=1334 audit(1769452077.975:432): prog-id=134 op=LOAD Jan 26 18:27:57.975000 audit: BPF prog-id=134 op=LOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.050593 kernel: audit: type=1300 audit(1769452077.975:432): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:58.101302 kernel: audit: type=1327 audit(1769452077.975:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: BPF prog-id=134 op=UNLOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.154438 kernel: audit: type=1334 audit(1769452077.975:433): prog-id=134 op=UNLOAD Jan 26 18:27:58.154540 kernel: audit: type=1300 audit(1769452077.975:433): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.154574 kernel: audit: type=1327 audit(1769452077.975:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:58.196281 kernel: audit: type=1334 audit(1769452077.975:434): prog-id=135 op=LOAD Jan 26 18:27:57.975000 audit: BPF prog-id=135 op=LOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.227044 containerd[1602]: time="2026-01-26T18:27:58.226567590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5srsf,Uid:5e804a4b-7136-4ed7-be76-ae9408a6e5b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57\"" Jan 26 18:27:58.244339 kubelet[2838]: E0126 18:27:58.244312 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:58.255409 kernel: audit: type=1300 audit(1769452077.975:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:58.311717 kernel: audit: 
type=1327 audit(1769452077.975:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: BPF prog-id=136 op=LOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: BPF prog-id=136 op=UNLOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: BPF prog-id=135 op=UNLOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.975000 audit: BPF prog-id=137 op=LOAD Jan 26 18:27:57.975000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2906 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635323531303231346638313563623431376539323435316130363164 Jan 26 18:27:57.993000 audit: BPF prog-id=138 op=LOAD Jan 26 18:27:57.996000 audit: BPF prog-id=139 op=LOAD Jan 26 18:27:57.996000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=139 op=UNLOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=140 op=LOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=141 op=LOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=141 op=UNLOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=140 op=UNLOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:57.997000 audit: BPF prog-id=142 op=LOAD Jan 26 18:27:57.997000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2926 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:57.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262376361306135623361643462333965653463353337333634336437 Jan 26 18:27:58.320370 containerd[1602]: time="2026-01-26T18:27:58.319715507Z" level=info msg="CreateContainer within sandbox 
\"652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 26 18:27:58.333297 containerd[1602]: time="2026-01-26T18:27:58.333253640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r684s,Uid:e0a2fb64-8d8d-4263-adb5-5484dc389cd8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9\"" Jan 26 18:27:58.343332 kubelet[2838]: E0126 18:27:58.343047 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:58.343433 containerd[1602]: time="2026-01-26T18:27:58.343373451Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 26 18:27:58.386340 containerd[1602]: time="2026-01-26T18:27:58.385573177Z" level=info msg="Container c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:27:58.390733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668784038.mount: Deactivated successfully. 
Jan 26 18:27:58.419238 containerd[1602]: time="2026-01-26T18:27:58.418521524Z" level=info msg="CreateContainer within sandbox \"652510214f815cb417e92451a061d6c47a5d2b4115d75894194b8c5e7a92fd57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf\"" Jan 26 18:27:58.421602 containerd[1602]: time="2026-01-26T18:27:58.421558350Z" level=info msg="StartContainer for \"c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf\"" Jan 26 18:27:58.426332 containerd[1602]: time="2026-01-26T18:27:58.424934814Z" level=info msg="connecting to shim c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf" address="unix:///run/containerd/s/0851fe5c29fa167995f0336949f4cb059fb3f830fbd7ee0c7f53bc4822b04c1a" protocol=ttrpc version=3 Jan 26 18:27:58.532272 systemd[1]: Started cri-containerd-c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf.scope - libcontainer container c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf. 
Jan 26 18:27:58.562571 kubelet[2838]: E0126 18:27:58.561424 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:58.668000 audit: BPF prog-id=143 op=LOAD Jan 26 18:27:58.668000 audit[2986]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2906 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353331326465393135643466383861343336303066353230343264 Jan 26 18:27:58.668000 audit: BPF prog-id=144 op=LOAD Jan 26 18:27:58.668000 audit[2986]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2906 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353331326465393135643466383861343336303066353230343264 Jan 26 18:27:58.668000 audit: BPF prog-id=144 op=UNLOAD Jan 26 18:27:58.668000 audit[2986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.668000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353331326465393135643466383861343336303066353230343264 Jan 26 18:27:58.668000 audit: BPF prog-id=143 op=UNLOAD Jan 26 18:27:58.668000 audit[2986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353331326465393135643466383861343336303066353230343264 Jan 26 18:27:58.668000 audit: BPF prog-id=145 op=LOAD Jan 26 18:27:58.668000 audit[2986]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2906 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:58.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353331326465393135643466383861343336303066353230343264 Jan 26 18:27:58.812147 containerd[1602]: time="2026-01-26T18:27:58.811327443Z" level=info msg="StartContainer for \"c55312de915d4f88a43600f52042d12e375b00b5a1991492933f42930c1359bf\" returns successfully" Jan 26 18:27:59.328000 audit[3057]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 26 18:27:59.328000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe541111e0 a2=0 a3=7ffe541111cc items=0 ppid=3000 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 26 18:27:59.333000 audit[3056]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.333000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd40f67270 a2=0 a3=7ffd40f6725c items=0 ppid=3000 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 26 18:27:59.341000 audit[3061]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.341000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffda45a3e0 a2=0 a3=7fffda45a3cc items=0 ppid=3000 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 26 18:27:59.345000 audit[3060]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3060 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:59.345000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec9155e10 a2=0 a3=7ffec9155dfc items=0 ppid=3000 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 26 18:27:59.360000 audit[3064]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:59.360000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd51404d30 a2=0 a3=7ffd51404d1c items=0 ppid=3000 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 26 18:27:59.365000 audit[3063]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.365000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdedddd6d0 a2=0 a3=7ffdedddd6bc items=0 ppid=3000 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 26 18:27:59.447000 audit[3066]: NETFILTER_CFG table=filter:60 family=2 
entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.447000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffee85d1690 a2=0 a3=7ffee85d167c items=0 ppid=3000 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 26 18:27:59.465000 audit[3068]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.465000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc06621890 a2=0 a3=7ffc0662187c items=0 ppid=3000 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 26 18:27:59.493000 audit[3071]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.493000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff3cf8a9f0 a2=0 a3=7fff3cf8a9dc items=0 ppid=3000 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.493000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 26 18:27:59.501000 audit[3072]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.501000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca78de2e0 a2=0 a3=7ffca78de2cc items=0 ppid=3000 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 26 18:27:59.521000 audit[3074]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.521000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc315ccb20 a2=0 a3=7ffc315ccb0c items=0 ppid=3000 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 26 18:27:59.529000 audit[3075]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.529000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffff6790780 a2=0 a3=7ffff679076c items=0 ppid=3000 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 26 18:27:59.549000 audit[3077]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.549000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcfe9c110 a2=0 a3=7fffcfe9c0fc items=0 ppid=3000 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 26 18:27:59.576929 kubelet[2838]: E0126 18:27:59.576418 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:59.578166 kubelet[2838]: E0126 18:27:59.577678 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:27:59.588000 audit[3080]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.588000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc445cc020 
a2=0 a3=7ffc445cc00c items=0 ppid=3000 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 26 18:27:59.607432 kubelet[2838]: I0126 18:27:59.606704 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5srsf" podStartSLOduration=2.606686328 podStartE2EDuration="2.606686328s" podCreationTimestamp="2026-01-26 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:27:59.606497474 +0000 UTC m=+6.674465806" watchObservedRunningTime="2026-01-26 18:27:59.606686328 +0000 UTC m=+6.674654679" Jan 26 18:27:59.610000 audit[3081]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.610000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef13ec8a0 a2=0 a3=7ffef13ec88c items=0 ppid=3000 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 26 18:27:59.630000 audit[3083]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.630000 audit[3083]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe0aca8f40 a2=0 a3=7ffe0aca8f2c items=0 ppid=3000 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 26 18:27:59.636000 audit[3084]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.636000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2be45430 a2=0 a3=7ffe2be4541c items=0 ppid=3000 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 26 18:27:59.654000 audit[3086]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.654000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff98554e30 a2=0 a3=7fff98554e1c items=0 ppid=3000 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.654000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:27:59.685000 audit[3089]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.685000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd40359d60 a2=0 a3=7ffd40359d4c items=0 ppid=3000 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:27:59.710000 audit[3092]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.710000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd87cd6d60 a2=0 a3=7ffd87cd6d4c items=0 ppid=3000 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 26 18:27:59.718000 audit[3093]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.718000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebe889240 a2=0 a3=7ffebe88922c items=0 ppid=3000 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 26 18:27:59.738000 audit[3095]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.738000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc7cfbce90 a2=0 a3=7ffc7cfbce7c items=0 ppid=3000 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:27:59.766000 audit[3098]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.766000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7a9f36c0 a2=0 a3=7ffe7a9f36ac items=0 ppid=3000 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.766000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:27:59.778000 audit[3099]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.778000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefe333e20 a2=0 a3=7ffefe333e0c items=0 ppid=3000 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 26 18:27:59.802000 audit[3101]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:27:59.802000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc1ed20100 a2=0 a3=7ffc1ed200ec items=0 ppid=3000 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 26 18:27:59.921000 audit[3107]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:27:59.921000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1f97a980 a2=0 a3=7ffe1f97a96c 
items=0 ppid=3000 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:27:59.950000 audit[3107]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:27:59.950000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe1f97a980 a2=0 a3=7ffe1f97a96c items=0 ppid=3000 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:27:59.957000 audit[3112]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:59.957000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeda65bc50 a2=0 a3=7ffeda65bc3c items=0 ppid=3000 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 26 18:27:59.972000 audit[3114]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:59.972000 audit[3114]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffe49345ce0 a2=0 a3=7ffe49345ccc items=0 ppid=3000 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 26 18:27:59.995000 audit[3117]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:27:59.995000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffcab677a0 a2=0 a3=7fffcab6778c items=0 ppid=3000 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:27:59.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 26 18:28:00.004000 audit[3118]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.004000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd62630a30 a2=0 a3=7ffd62630a1c items=0 ppid=3000 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.004000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 26 18:28:00.021000 audit[3120]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.021000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd1c46340 a2=0 a3=7ffcd1c4632c items=0 ppid=3000 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 26 18:28:00.029000 audit[3121]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.029000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff63a907d0 a2=0 a3=7fff63a907bc items=0 ppid=3000 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 26 18:28:00.047000 audit[3123]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.047000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb45a54b0 a2=0 a3=7fffb45a549c items=0 ppid=3000 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 26 18:28:00.079000 audit[3126]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.079000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcf2d50340 a2=0 a3=7ffcf2d5032c items=0 ppid=3000 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 26 18:28:00.089000 audit[3127]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.089000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeefa07c90 a2=0 a3=7ffeefa07c7c items=0 ppid=3000 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 26 18:28:00.106000 audit[3129]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.106000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd9797240 a2=0 a3=7ffdd979722c items=0 ppid=3000 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 26 18:28:00.114000 audit[3130]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.114000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2ffba3a0 a2=0 a3=7ffc2ffba38c items=0 ppid=3000 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 26 18:28:00.130000 audit[3132]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.130000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefe2356c0 a2=0 a3=7ffefe2356ac items=0 ppid=3000 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.130000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:28:00.158000 audit[3135]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.158000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddb1e9e10 a2=0 a3=7ffddb1e9dfc items=0 ppid=3000 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 26 18:28:00.183000 audit[3138]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.183000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff88604470 a2=0 a3=7fff8860445c items=0 ppid=3000 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 26 18:28:00.189000 audit[3139]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.189000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebd05d120 a2=0 a3=7ffebd05d10c items=0 ppid=3000 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 26 18:28:00.223000 audit[3141]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.223000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffc5dfb340 a2=0 a3=7fffc5dfb32c items=0 ppid=3000 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:28:00.266000 audit[3144]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.266000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe42e17e70 a2=0 a3=7ffe42e17e5c items=0 ppid=3000 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.266000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:28:00.279000 audit[3145]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.279000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7f8f8170 a2=0 a3=7ffd7f8f815c items=0 ppid=3000 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 26 18:28:00.298000 audit[3147]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.298000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff7fb5a360 a2=0 a3=7fff7fb5a34c items=0 ppid=3000 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 26 18:28:00.310000 audit[3148]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.310000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8bab0190 a2=0 
a3=7ffe8bab017c items=0 ppid=3000 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 26 18:28:00.327000 audit[3150]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.327000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc39500470 a2=0 a3=7ffc3950045c items=0 ppid=3000 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:28:00.365000 audit[3153]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:28:00.365000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc6b1c200 a2=0 a3=7ffdc6b1c1ec items=0 ppid=3000 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:28:00.394000 audit[3155]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 26 18:28:00.394000 audit[3155]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdcad72520 a2=0 a3=7ffdcad7250c items=0 ppid=3000 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.394000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:00.395000 audit[3155]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 26 18:28:00.395000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdcad72520 a2=0 a3=7ffdcad7250c items=0 ppid=3000 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:00.395000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:00.528570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020624852.mount: Deactivated successfully. 
Jan 26 18:28:00.581192 kubelet[2838]: E0126 18:28:00.581072 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:01.096247 kubelet[2838]: E0126 18:28:01.095645 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:01.582730 kubelet[2838]: E0126 18:28:01.582361 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:02.590171 kubelet[2838]: E0126 18:28:02.589210 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:03.056293 kubelet[2838]: E0126 18:28:03.056249 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:03.607240 kubelet[2838]: E0126 18:28:03.606613 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:06.652954 containerd[1602]: time="2026-01-26T18:28:06.652652739Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:06.657284 containerd[1602]: time="2026-01-26T18:28:06.657084077Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 26 18:28:06.661006 containerd[1602]: time="2026-01-26T18:28:06.660589036Z" level=info msg="ImageCreate event 
name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:06.666542 containerd[1602]: time="2026-01-26T18:28:06.666132099Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:06.667638 containerd[1602]: time="2026-01-26T18:28:06.667146858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 8.32373066s" Jan 26 18:28:06.667638 containerd[1602]: time="2026-01-26T18:28:06.667198133Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 26 18:28:06.680332 containerd[1602]: time="2026-01-26T18:28:06.680086222Z" level=info msg="CreateContainer within sandbox \"bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 26 18:28:06.709726 containerd[1602]: time="2026-01-26T18:28:06.709274497Z" level=info msg="Container 41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:06.726950 containerd[1602]: time="2026-01-26T18:28:06.726684224Z" level=info msg="CreateContainer within sandbox \"bb7ca0a5b3ad4b39ee4c5373643d72c4ba3e5f193d4459d817b52da2acefedb9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864\"" Jan 26 18:28:06.729707 containerd[1602]: time="2026-01-26T18:28:06.729460062Z" level=info 
msg="StartContainer for \"41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864\"" Jan 26 18:28:06.733204 containerd[1602]: time="2026-01-26T18:28:06.731020057Z" level=info msg="connecting to shim 41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864" address="unix:///run/containerd/s/b7e5d8b417980513a762b10e02cfbabd38d1b33220e98ede29d94104467e96c2" protocol=ttrpc version=3 Jan 26 18:28:06.796398 systemd[1]: Started cri-containerd-41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864.scope - libcontainer container 41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864. Jan 26 18:28:06.854000 audit: BPF prog-id=146 op=LOAD Jan 26 18:28:06.865471 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 26 18:28:06.865611 kernel: audit: type=1334 audit(1769452086.854:503): prog-id=146 op=LOAD Jan 26 18:28:06.876178 kernel: audit: type=1334 audit(1769452086.856:504): prog-id=147 op=LOAD Jan 26 18:28:06.856000 audit: BPF prog-id=147 op=LOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.927193 kernel: audit: type=1300 audit(1769452086.856:504): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 
audit: BPF prog-id=147 op=UNLOAD Jan 26 18:28:06.978537 kernel: audit: type=1327 audit(1769452086.856:504): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.978982 kernel: audit: type=1334 audit(1769452086.856:505): prog-id=147 op=UNLOAD Jan 26 18:28:06.979016 kernel: audit: type=1300 audit(1769452086.856:505): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:07.032975 containerd[1602]: time="2026-01-26T18:28:07.030634210Z" level=info msg="StartContainer for \"41f0cdabd890b507fa38f9285942f091a372d7976444944634163e7545be7864\" returns successfully" Jan 26 18:28:07.053574 kernel: audit: type=1327 audit(1769452086.856:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 
18:28:07.055212 kernel: audit: type=1334 audit(1769452086.856:506): prog-id=148 op=LOAD Jan 26 18:28:06.856000 audit: BPF prog-id=148 op=LOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:07.105539 kernel: audit: type=1300 audit(1769452086.856:506): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:07.105609 kernel: audit: type=1327 audit(1769452086.856:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 audit: BPF prog-id=149 op=LOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 audit: BPF prog-id=149 op=UNLOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 audit: BPF prog-id=148 op=UNLOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:06.856000 audit: BPF prog-id=150 op=LOAD Jan 26 18:28:06.856000 audit[3164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2926 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:06.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663063646162643839306235303766613338663932383539343266 Jan 26 18:28:07.648501 kubelet[2838]: I0126 18:28:07.648452 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r684s" podStartSLOduration=2.316996464 podStartE2EDuration="10.648438826s" podCreationTimestamp="2026-01-26 18:27:57 +0000 UTC" firstStartedPulling="2026-01-26 18:27:58.33757133 +0000 UTC m=+5.405539662" lastFinishedPulling="2026-01-26 18:28:06.669013692 +0000 UTC m=+13.736982024" observedRunningTime="2026-01-26 18:28:07.639341257 +0000 UTC m=+14.707309588" watchObservedRunningTime="2026-01-26 18:28:07.648438826 +0000 UTC m=+14.716407157" Jan 26 18:28:14.861000 audit[3238]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:14.882033 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 26 18:28:14.882135 kernel: audit: type=1325 audit(1769452094.861:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:14.861000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2c6bfe70 a2=0 a3=7fff2c6bfe5c items=0 ppid=3000 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:14.959937 kernel: audit: type=1300 audit(1769452094.861:511): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2c6bfe70 a2=0 a3=7fff2c6bfe5c items=0 ppid=3000 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:14.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:14.987097 kernel: audit: type=1327 audit(1769452094.861:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:14.884000 audit[3238]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:15.013064 kernel: audit: type=1325 audit(1769452094.884:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:14.884000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2c6bfe70 a2=0 a3=0 items=0 ppid=3000 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:15.096230 kernel: audit: type=1300 audit(1769452094.884:512): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2c6bfe70 a2=0 a3=0 items=0 ppid=3000 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:15.096357 kernel: audit: type=1327 audit(1769452094.884:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:14.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:15.182000 audit[3240]: NETFILTER_CFG table=filter:107 family=2 
entries=16 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:15.215498 kernel: audit: type=1325 audit(1769452095.182:513): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:15.220246 kernel: audit: type=1300 audit(1769452095.182:513): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd2143cb70 a2=0 a3=7ffd2143cb5c items=0 ppid=3000 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:15.182000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd2143cb70 a2=0 a3=7ffd2143cb5c items=0 ppid=3000 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:15.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:15.298028 kernel: audit: type=1327 audit(1769452095.182:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:15.214000 audit[3240]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:15.329299 kernel: audit: type=1325 audit(1769452095.214:514): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:15.214000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2143cb70 a2=0 a3=0 items=0 ppid=3000 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:15.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:16.211000 audit[1846]: USER_END pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:28:16.212286 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 26 18:28:16.214000 audit[1846]: CRED_DISP pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:28:16.230944 sshd[1845]: Connection closed by 10.0.0.1 port 33482 Jan 26 18:28:16.230070 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 26 18:28:16.238000 audit[1839]: USER_END pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:28:16.238000 audit[1839]: CRED_DISP pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:28:16.251198 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 26 18:28:16.251309 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:33482.service: Deactivated successfully. 
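The kernel audit lines above carry `audit(EPOCH.MILLIS:SERIAL)` stamps rather than wall-clock times. A small Python sketch to convert one to UTC (the helper name `audit_time` is illustrative; the example stamp is taken from the BPF LOAD events earlier in this log, which journald timestamps as Jan 26 18:28:06):

```python
from datetime import datetime, timezone

# Convert an audit(EPOCH.MILLIS:SERIAL) stamp to a readable UTC time.
def audit_time(stamp: str) -> str:
    epoch, _serial = stamp.split(":")
    dt = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S UTC")

# audit(1769452086.854:503) from the earlier BPF prog-id=146 LOAD event:
print(audit_time("1769452086.854:503"))
# → 2026-01-26 18:28:06 UTC
```

The serial after the colon groups related records (SYSCALL, PROCTITLE, NETFILTER_CFG) belonging to one event, which is why several consecutive lines here share the same `:504` or `:511` suffix.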
Jan 26 18:28:16.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.106:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:28:16.263153 systemd[1]: session-10.scope: Deactivated successfully. Jan 26 18:28:16.266726 systemd[1]: session-10.scope: Consumed 10.384s CPU time, 219.8M memory peak. Jan 26 18:28:16.286158 systemd-logind[1580]: Removed session 10. Jan 26 18:28:19.691000 audit[3264]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:19.691000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdf5721d00 a2=0 a3=7ffdf5721cec items=0 ppid=3000 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:19.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:19.702000 audit[3264]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:19.702000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf5721d00 a2=0 a3=0 items=0 ppid=3000 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:19.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:19.813000 audit[3266]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3266 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:19.813000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd23c66320 a2=0 a3=7ffd23c6630c items=0 ppid=3000 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:19.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:19.826000 audit[3266]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:19.826000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd23c66320 a2=0 a3=0 items=0 ppid=3000 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:19.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:20.878416 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 26 18:28:20.878699 kernel: audit: type=1325 audit(1769452100.863:524): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:20.863000 audit[3268]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:20.903221 kernel: audit: type=1300 audit(1769452100.863:524): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbf7cd3b0 a2=0 a3=7fffbf7cd39c items=0 ppid=3000 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:20.863000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbf7cd3b0 a2=0 a3=7fffbf7cd39c items=0 ppid=3000 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:20.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:20.982370 kernel: audit: type=1327 audit(1769452100.863:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:20.982476 kernel: audit: type=1325 audit(1769452100.918:525): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:20.918000 audit[3268]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:20.918000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbf7cd3b0 a2=0 a3=0 items=0 ppid=3000 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:21.058167 kernel: audit: type=1300 audit(1769452100.918:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbf7cd3b0 a2=0 a3=0 items=0 ppid=3000 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:21.058277 kernel: audit: type=1327 audit(1769452100.918:525): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:20.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:22.999351 systemd[1]: Created slice kubepods-besteffort-pod7f297736_8350_422b_aa67_c8422674c3f8.slice - libcontainer container kubepods-besteffort-pod7f297736_8350_422b_aa67_c8422674c3f8.slice. Jan 26 18:28:23.012998 kubelet[2838]: I0126 18:28:23.012110 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zsm\" (UniqueName: \"kubernetes.io/projected/7f297736-8350-422b-aa67-c8422674c3f8-kube-api-access-t5zsm\") pod \"calico-typha-575c6f599d-4tfcp\" (UID: \"7f297736-8350-422b-aa67-c8422674c3f8\") " pod="calico-system/calico-typha-575c6f599d-4tfcp" Jan 26 18:28:23.012998 kubelet[2838]: I0126 18:28:23.012267 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f297736-8350-422b-aa67-c8422674c3f8-tigera-ca-bundle\") pod \"calico-typha-575c6f599d-4tfcp\" (UID: \"7f297736-8350-422b-aa67-c8422674c3f8\") " pod="calico-system/calico-typha-575c6f599d-4tfcp" Jan 26 18:28:23.012998 kubelet[2838]: I0126 18:28:23.012300 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7f297736-8350-422b-aa67-c8422674c3f8-typha-certs\") pod \"calico-typha-575c6f599d-4tfcp\" (UID: \"7f297736-8350-422b-aa67-c8422674c3f8\") " pod="calico-system/calico-typha-575c6f599d-4tfcp" Jan 26 18:28:23.012000 audit[3270]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.044955 kernel: audit: type=1325 audit(1769452103.012:526): table=filter:115 family=2 
entries=21 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.012000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc57472a70 a2=0 a3=7ffc57472a5c items=0 ppid=3000 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.095066 kernel: audit: type=1300 audit(1769452103.012:526): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc57472a70 a2=0 a3=7ffc57472a5c items=0 ppid=3000 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:23.124603 kernel: audit: type=1327 audit(1769452103.012:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:23.049000 audit[3270]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.152419 kernel: audit: type=1325 audit(1769452103.049:527): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.049000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc57472a70 a2=0 a3=0 items=0 ppid=3000 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.049000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:23.219000 audit[3274]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.219000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff9d9bf960 a2=0 a3=7fff9d9bf94c items=0 ppid=3000 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:23.236000 audit[3274]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:23.236000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9d9bf960 a2=0 a3=0 items=0 ppid=3000 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:23.316281 kubelet[2838]: E0126 18:28:23.314349 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:23.330120 containerd[1602]: time="2026-01-26T18:28:23.328295177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c6f599d-4tfcp,Uid:7f297736-8350-422b-aa67-c8422674c3f8,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:23.369249 
systemd[1]: Created slice kubepods-besteffort-pod9d373e84_8fd9_4a11_a600_c4bff5314d18.slice - libcontainer container kubepods-besteffort-pod9d373e84_8fd9_4a11_a600_c4bff5314d18.slice. Jan 26 18:28:23.421947 kubelet[2838]: I0126 18:28:23.421138 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d373e84-8fd9-4a11-a600-c4bff5314d18-node-certs\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.421947 kubelet[2838]: I0126 18:28:23.421172 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-policysync\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.421947 kubelet[2838]: I0126 18:28:23.421188 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-cni-bin-dir\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.421947 kubelet[2838]: I0126 18:28:23.421204 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-cni-net-dir\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.421947 kubelet[2838]: I0126 18:28:23.421224 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-var-run-calico\") pod \"calico-node-87m2g\" 
(UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422218 kubelet[2838]: I0126 18:28:23.421237 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-lib-modules\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422218 kubelet[2838]: I0126 18:28:23.421249 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zll28\" (UniqueName: \"kubernetes.io/projected/9d373e84-8fd9-4a11-a600-c4bff5314d18-kube-api-access-zll28\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422218 kubelet[2838]: I0126 18:28:23.421276 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-var-lib-calico\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422218 kubelet[2838]: I0126 18:28:23.421301 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-xtables-lock\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422218 kubelet[2838]: I0126 18:28:23.421327 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-flexvol-driver-host\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " 
pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422373 kubelet[2838]: I0126 18:28:23.421357 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d373e84-8fd9-4a11-a600-c4bff5314d18-cni-log-dir\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.422373 kubelet[2838]: I0126 18:28:23.421375 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d373e84-8fd9-4a11-a600-c4bff5314d18-tigera-ca-bundle\") pod \"calico-node-87m2g\" (UID: \"9d373e84-8fd9-4a11-a600-c4bff5314d18\") " pod="calico-system/calico-node-87m2g" Jan 26 18:28:23.443386 containerd[1602]: time="2026-01-26T18:28:23.443186536Z" level=info msg="connecting to shim 5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e" address="unix:///run/containerd/s/3129d5c473313680ad6776857cc27d4afedd7ed7155f73c9301e7e0b2b1a2091" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:23.533947 kubelet[2838]: E0126 18:28:23.532997 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:23.594390 kubelet[2838]: E0126 18:28:23.592349 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.594390 kubelet[2838]: W0126 18:28:23.592596 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.594390 kubelet[2838]: 
E0126 18:28:23.592625 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.597630 kubelet[2838]: E0126 18:28:23.597092 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.597630 kubelet[2838]: W0126 18:28:23.597230 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.597630 kubelet[2838]: E0126 18:28:23.597250 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.608213 systemd[1]: Started cri-containerd-5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e.scope - libcontainer container 5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e. Jan 26 18:28:23.609425 kubelet[2838]: E0126 18:28:23.609405 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.609425 kubelet[2838]: W0126 18:28:23.609420 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.609616 kubelet[2838]: E0126 18:28:23.609436 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.618374 kubelet[2838]: E0126 18:28:23.618166 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.618374 kubelet[2838]: W0126 18:28:23.618190 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.618374 kubelet[2838]: E0126 18:28:23.618214 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.620287 kubelet[2838]: E0126 18:28:23.619908 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.620287 kubelet[2838]: W0126 18:28:23.619921 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.620287 kubelet[2838]: E0126 18:28:23.619934 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.620388 kubelet[2838]: E0126 18:28:23.620312 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.620388 kubelet[2838]: W0126 18:28:23.620322 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.620388 kubelet[2838]: E0126 18:28:23.620332 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.621940 kubelet[2838]: E0126 18:28:23.621436 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.621940 kubelet[2838]: W0126 18:28:23.621448 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.621940 kubelet[2838]: E0126 18:28:23.621459 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.627968 kubelet[2838]: E0126 18:28:23.627312 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.627968 kubelet[2838]: W0126 18:28:23.627445 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.627968 kubelet[2838]: E0126 18:28:23.627461 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.643366 kubelet[2838]: E0126 18:28:23.642999 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.643366 kubelet[2838]: W0126 18:28:23.643135 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.643366 kubelet[2838]: E0126 18:28:23.643152 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.648959 kubelet[2838]: E0126 18:28:23.648695 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.648959 kubelet[2838]: W0126 18:28:23.648720 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.649250 kubelet[2838]: E0126 18:28:23.649110 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.650108 kubelet[2838]: E0126 18:28:23.649722 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.651113 kubelet[2838]: W0126 18:28:23.650732 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.651222 kubelet[2838]: E0126 18:28:23.651123 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.656150 kubelet[2838]: E0126 18:28:23.656009 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.656150 kubelet[2838]: W0126 18:28:23.656148 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.656267 kubelet[2838]: E0126 18:28:23.656172 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.660951 kubelet[2838]: E0126 18:28:23.660027 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.660951 kubelet[2838]: W0126 18:28:23.660043 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.660951 kubelet[2838]: E0126 18:28:23.660057 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.664908 kubelet[2838]: E0126 18:28:23.664438 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.664908 kubelet[2838]: W0126 18:28:23.664675 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.664908 kubelet[2838]: E0126 18:28:23.664691 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.670671 kubelet[2838]: E0126 18:28:23.670378 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.670671 kubelet[2838]: W0126 18:28:23.670616 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.670671 kubelet[2838]: E0126 18:28:23.670631 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.672929 kubelet[2838]: E0126 18:28:23.671167 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.672929 kubelet[2838]: W0126 18:28:23.671179 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.672929 kubelet[2838]: E0126 18:28:23.671194 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.672929 kubelet[2838]: E0126 18:28:23.671443 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.672929 kubelet[2838]: W0126 18:28:23.671451 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.672929 kubelet[2838]: E0126 18:28:23.671462 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.673263 kubelet[2838]: E0126 18:28:23.673128 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.673263 kubelet[2838]: W0126 18:28:23.673263 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.673356 kubelet[2838]: E0126 18:28:23.673276 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.678679 kubelet[2838]: E0126 18:28:23.678376 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.678679 kubelet[2838]: W0126 18:28:23.678608 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.678679 kubelet[2838]: E0126 18:28:23.678621 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.679303 kubelet[2838]: E0126 18:28:23.679158 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.679303 kubelet[2838]: W0126 18:28:23.679300 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.679364 kubelet[2838]: E0126 18:28:23.679315 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.681304 kubelet[2838]: E0126 18:28:23.680610 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.681304 kubelet[2838]: W0126 18:28:23.680726 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.681304 kubelet[2838]: E0126 18:28:23.680737 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.682697 kubelet[2838]: E0126 18:28:23.682407 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.682697 kubelet[2838]: W0126 18:28:23.682635 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.682697 kubelet[2838]: E0126 18:28:23.682645 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.688169 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.689128 kubelet[2838]: W0126 18:28:23.688186 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.688197 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.688434 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.689128 kubelet[2838]: W0126 18:28:23.688442 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.688451 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.689020 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.689128 kubelet[2838]: W0126 18:28:23.689029 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.689128 kubelet[2838]: E0126 18:28:23.689037 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.690244 kubelet[2838]: E0126 18:28:23.690124 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.690244 kubelet[2838]: W0126 18:28:23.690241 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.690326 kubelet[2838]: E0126 18:28:23.690252 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.690923 kubelet[2838]: E0126 18:28:23.690428 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.690923 kubelet[2838]: W0126 18:28:23.690437 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.690923 kubelet[2838]: E0126 18:28:23.690445 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.691160 kubelet[2838]: E0126 18:28:23.691104 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.691160 kubelet[2838]: W0126 18:28:23.691133 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.691160 kubelet[2838]: E0126 18:28:23.691141 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.691160 kubelet[2838]: I0126 18:28:23.691160 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e99188ce-3ac3-4524-8689-b68793ad3ef1-registration-dir\") pod \"csi-node-driver-gzr9m\" (UID: \"e99188ce-3ac3-4524-8689-b68793ad3ef1\") " pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:23.693089 kubelet[2838]: E0126 18:28:23.692724 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.695085 kubelet[2838]: W0126 18:28:23.694854 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.695085 kubelet[2838]: E0126 18:28:23.694975 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.695294 kubelet[2838]: I0126 18:28:23.695159 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e99188ce-3ac3-4524-8689-b68793ad3ef1-kubelet-dir\") pod \"csi-node-driver-gzr9m\" (UID: \"e99188ce-3ac3-4524-8689-b68793ad3ef1\") " pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:23.695568 kubelet[2838]: E0126 18:28:23.695356 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.695609 kubelet[2838]: W0126 18:28:23.695584 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.695609 kubelet[2838]: E0126 18:28:23.695596 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.701197 kubelet[2838]: E0126 18:28:23.701065 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.701197 kubelet[2838]: W0126 18:28:23.701195 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.701313 kubelet[2838]: E0126 18:28:23.701208 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.701688 kubelet[2838]: E0126 18:28:23.701402 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.701688 kubelet[2838]: W0126 18:28:23.701632 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.701688 kubelet[2838]: E0126 18:28:23.701643 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.702407 kubelet[2838]: E0126 18:28:23.702190 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.702407 kubelet[2838]: W0126 18:28:23.702316 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.702407 kubelet[2838]: E0126 18:28:23.702327 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.706723 kubelet[2838]: I0126 18:28:23.706406 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e99188ce-3ac3-4524-8689-b68793ad3ef1-varrun\") pod \"csi-node-driver-gzr9m\" (UID: \"e99188ce-3ac3-4524-8689-b68793ad3ef1\") " pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:23.706723 kubelet[2838]: E0126 18:28:23.706668 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.706723 kubelet[2838]: W0126 18:28:23.706679 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.706723 kubelet[2838]: E0126 18:28:23.706691 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.709436 kubelet[2838]: E0126 18:28:23.709394 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.709436 kubelet[2838]: W0126 18:28:23.709409 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.709436 kubelet[2838]: E0126 18:28:23.709421 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.712124 kubelet[2838]: E0126 18:28:23.711678 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.712124 kubelet[2838]: W0126 18:28:23.712062 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.712124 kubelet[2838]: E0126 18:28:23.712076 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.716343 kubelet[2838]: E0126 18:28:23.715986 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.716343 kubelet[2838]: W0126 18:28:23.716099 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.716343 kubelet[2838]: E0126 18:28:23.716113 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.718143 kubelet[2838]: I0126 18:28:23.718002 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e99188ce-3ac3-4524-8689-b68793ad3ef1-socket-dir\") pod \"csi-node-driver-gzr9m\" (UID: \"e99188ce-3ac3-4524-8689-b68793ad3ef1\") " pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:23.719074 kubelet[2838]: E0126 18:28:23.718689 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.719074 kubelet[2838]: W0126 18:28:23.718703 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.719074 kubelet[2838]: E0126 18:28:23.718715 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.720623 kubelet[2838]: E0126 18:28:23.720610 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.720705 kubelet[2838]: W0126 18:28:23.720688 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.721026 kubelet[2838]: E0126 18:28:23.721008 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.722725 kubelet[2838]: E0126 18:28:23.722433 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.722725 kubelet[2838]: W0126 18:28:23.722639 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.722725 kubelet[2838]: E0126 18:28:23.722651 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.725164 kubelet[2838]: E0126 18:28:23.725030 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:23.731159 kubelet[2838]: E0126 18:28:23.730618 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.731159 kubelet[2838]: W0126 18:28:23.730637 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.731159 kubelet[2838]: E0126 18:28:23.730656 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.732224 containerd[1602]: time="2026-01-26T18:28:23.731434650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87m2g,Uid:9d373e84-8fd9-4a11-a600-c4bff5314d18,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:23.735242 kubelet[2838]: E0126 18:28:23.735119 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.735292 kubelet[2838]: W0126 18:28:23.735243 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.735292 kubelet[2838]: E0126 18:28:23.735259 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.740016 kubelet[2838]: E0126 18:28:23.739224 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.740016 kubelet[2838]: W0126 18:28:23.739238 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.740016 kubelet[2838]: E0126 18:28:23.739250 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.836673 kubelet[2838]: E0126 18:28:23.836644 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.837355 kubelet[2838]: W0126 18:28:23.837337 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.837440 kubelet[2838]: E0126 18:28:23.837425 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.837000 audit: BPF prog-id=151 op=LOAD Jan 26 18:28:23.839000 audit: BPF prog-id=152 op=LOAD Jan 26 18:28:23.844264 kubelet[2838]: E0126 18:28:23.844130 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.844264 kubelet[2838]: W0126 18:28:23.844250 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.844330 kubelet[2838]: E0126 18:28:23.844268 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.839000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.844000 audit: BPF prog-id=152 op=UNLOAD Jan 26 18:28:23.844000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.847000 audit: BPF prog-id=153 op=LOAD Jan 26 18:28:23.847000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.847000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.847000 audit: BPF prog-id=154 op=LOAD Jan 26 18:28:23.847000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.848000 audit: BPF prog-id=154 op=UNLOAD Jan 26 18:28:23.848000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.848000 audit: BPF prog-id=153 op=UNLOAD Jan 26 18:28:23.848000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:23.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.848000 audit: BPF prog-id=155 op=LOAD Jan 26 18:28:23.848000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3284 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:23.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566373163623064343238663835363363376330383732386538366330 Jan 26 18:28:23.855095 kubelet[2838]: E0126 18:28:23.855009 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.855095 kubelet[2838]: W0126 18:28:23.855031 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.855095 kubelet[2838]: E0126 18:28:23.855048 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.858308 kubelet[2838]: E0126 18:28:23.858177 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.858308 kubelet[2838]: W0126 18:28:23.858295 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.858308 kubelet[2838]: E0126 18:28:23.858311 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.866283 kubelet[2838]: E0126 18:28:23.865112 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.866283 kubelet[2838]: W0126 18:28:23.865221 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.866283 kubelet[2838]: E0126 18:28:23.865237 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.866383 kubelet[2838]: E0126 18:28:23.866330 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.866383 kubelet[2838]: W0126 18:28:23.866340 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.866383 kubelet[2838]: E0126 18:28:23.866352 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.868675 kubelet[2838]: E0126 18:28:23.868350 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.868675 kubelet[2838]: W0126 18:28:23.868457 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.868675 kubelet[2838]: E0126 18:28:23.868470 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.872197 kubelet[2838]: E0126 18:28:23.871670 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.872197 kubelet[2838]: W0126 18:28:23.872018 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.872197 kubelet[2838]: E0126 18:28:23.872032 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.876023 kubelet[2838]: E0126 18:28:23.874297 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.876023 kubelet[2838]: W0126 18:28:23.874309 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.876023 kubelet[2838]: E0126 18:28:23.874682 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.876144 containerd[1602]: time="2026-01-26T18:28:23.874651296Z" level=info msg="connecting to shim 8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9" address="unix:///run/containerd/s/7c5b30107dc51001d8d8357078d7abdcdb23a0596faef0ecab61a89f94296d49" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:23.878063 kubelet[2838]: E0126 18:28:23.877934 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.878063 kubelet[2838]: W0126 18:28:23.877948 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.878063 kubelet[2838]: E0126 18:28:23.877958 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.878712 kubelet[2838]: E0126 18:28:23.878192 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.878712 kubelet[2838]: W0126 18:28:23.878300 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.878712 kubelet[2838]: E0126 18:28:23.878315 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.882292 kubelet[2838]: E0126 18:28:23.882042 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.882292 kubelet[2838]: W0126 18:28:23.882061 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.882292 kubelet[2838]: E0126 18:28:23.882078 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.885285 kubelet[2838]: E0126 18:28:23.884234 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.885285 kubelet[2838]: W0126 18:28:23.884246 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.885285 kubelet[2838]: E0126 18:28:23.884257 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.885285 kubelet[2838]: I0126 18:28:23.885038 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cfhw\" (UniqueName: \"kubernetes.io/projected/e99188ce-3ac3-4524-8689-b68793ad3ef1-kube-api-access-5cfhw\") pod \"csi-node-driver-gzr9m\" (UID: \"e99188ce-3ac3-4524-8689-b68793ad3ef1\") " pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:23.890084 kubelet[2838]: E0126 18:28:23.890066 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.890159 kubelet[2838]: W0126 18:28:23.890147 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.890224 kubelet[2838]: E0126 18:28:23.890207 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.892333 kubelet[2838]: E0126 18:28:23.892318 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.892420 kubelet[2838]: W0126 18:28:23.892399 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.892625 kubelet[2838]: E0126 18:28:23.892607 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.893223 kubelet[2838]: E0126 18:28:23.893210 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.893281 kubelet[2838]: W0126 18:28:23.893271 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.894061 kubelet[2838]: E0126 18:28:23.894044 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.894432 kubelet[2838]: E0126 18:28:23.894417 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.894636 kubelet[2838]: W0126 18:28:23.894620 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.894701 kubelet[2838]: E0126 18:28:23.894688 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.897731 kubelet[2838]: E0126 18:28:23.897400 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.897731 kubelet[2838]: W0126 18:28:23.897613 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.897731 kubelet[2838]: E0126 18:28:23.897628 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.900876 kubelet[2838]: E0126 18:28:23.900237 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.900876 kubelet[2838]: W0126 18:28:23.900356 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.900876 kubelet[2838]: E0126 18:28:23.900370 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.903205 kubelet[2838]: E0126 18:28:23.902665 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.903205 kubelet[2838]: W0126 18:28:23.903075 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.903205 kubelet[2838]: E0126 18:28:23.903096 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.906235 kubelet[2838]: E0126 18:28:23.905715 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.906235 kubelet[2838]: W0126 18:28:23.905729 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.906235 kubelet[2838]: E0126 18:28:23.905953 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.908138 kubelet[2838]: E0126 18:28:23.907586 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.908138 kubelet[2838]: W0126 18:28:23.907599 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.908138 kubelet[2838]: E0126 18:28:23.907608 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.911050 kubelet[2838]: E0126 18:28:23.910382 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.911050 kubelet[2838]: W0126 18:28:23.910396 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.911050 kubelet[2838]: E0126 18:28:23.910408 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.988693 kubelet[2838]: E0126 18:28:23.988442 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.989871 kubelet[2838]: W0126 18:28:23.989229 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.989871 kubelet[2838]: E0126 18:28:23.989374 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.992323 kubelet[2838]: E0126 18:28:23.991646 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.992323 kubelet[2838]: W0126 18:28:23.992075 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.992323 kubelet[2838]: E0126 18:28:23.992098 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.995221 kubelet[2838]: E0126 18:28:23.995186 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.995221 kubelet[2838]: W0126 18:28:23.995198 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.995221 kubelet[2838]: E0126 18:28:23.995211 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:23.997112 kubelet[2838]: E0126 18:28:23.996589 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.998707 kubelet[2838]: W0126 18:28:23.998128 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.998707 kubelet[2838]: E0126 18:28:23.998260 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:23.998707 kubelet[2838]: E0126 18:28:23.998677 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:23.998707 kubelet[2838]: W0126 18:28:23.998688 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:23.998707 kubelet[2838]: E0126 18:28:23.998700 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:24.069361 kubelet[2838]: E0126 18:28:24.069110 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:24.070320 kubelet[2838]: W0126 18:28:24.069652 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:24.070320 kubelet[2838]: E0126 18:28:24.069681 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:24.096647 systemd[1]: Started cri-containerd-8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9.scope - libcontainer container 8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9. 
Jan 26 18:28:24.160000 audit: BPF prog-id=156 op=LOAD Jan 26 18:28:24.163000 audit: BPF prog-id=157 op=LOAD Jan 26 18:28:24.163000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.164000 audit: BPF prog-id=157 op=UNLOAD Jan 26 18:28:24.164000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.165000 audit: BPF prog-id=158 op=LOAD Jan 26 18:28:24.165000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.165000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.168000 audit: BPF prog-id=159 op=LOAD Jan 26 18:28:24.168000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.169000 audit: BPF prog-id=159 op=UNLOAD Jan 26 18:28:24.169000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.170000 audit: BPF prog-id=158 op=UNLOAD Jan 26 18:28:24.170000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:24.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.170000 audit: BPF prog-id=160 op=LOAD Jan 26 18:28:24.170000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3383 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:24.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633938336430303430616236356534373134333732356436343864 Jan 26 18:28:24.187996 containerd[1602]: time="2026-01-26T18:28:24.184666064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c6f599d-4tfcp,Uid:7f297736-8350-422b-aa67-c8422674c3f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e\"" Jan 26 18:28:24.214296 kubelet[2838]: E0126 18:28:24.212462 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:24.233280 containerd[1602]: time="2026-01-26T18:28:24.233215348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 26 18:28:24.335427 containerd[1602]: time="2026-01-26T18:28:24.335301717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87m2g,Uid:9d373e84-8fd9-4a11-a600-c4bff5314d18,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\"" Jan 26 18:28:24.339013 kubelet[2838]: E0126 18:28:24.338226 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:25.443720 kubelet[2838]: E0126 18:28:25.443455 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:25.480339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169270270.mount: Deactivated successfully. Jan 26 18:28:27.446167 kubelet[2838]: E0126 18:28:27.444720 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:29.230290 containerd[1602]: time="2026-01-26T18:28:29.230235932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:29.232267 containerd[1602]: time="2026-01-26T18:28:29.232101559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 26 18:28:29.236674 containerd[1602]: time="2026-01-26T18:28:29.236631498Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:29.245572 containerd[1602]: time="2026-01-26T18:28:29.245379990Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:29.247034 containerd[1602]: time="2026-01-26T18:28:29.246441357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.013183799s" Jan 26 18:28:29.247034 containerd[1602]: time="2026-01-26T18:28:29.247002440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 26 18:28:29.255040 containerd[1602]: time="2026-01-26T18:28:29.251352714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 26 18:28:29.304726 containerd[1602]: time="2026-01-26T18:28:29.304265274Z" level=info msg="CreateContainer within sandbox \"5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 26 18:28:29.338344 containerd[1602]: time="2026-01-26T18:28:29.338297494Z" level=info msg="Container e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:29.365732 containerd[1602]: time="2026-01-26T18:28:29.365340435Z" level=info msg="CreateContainer within sandbox \"5f71cb0d428f8563c7c08728e86c0e01ab6e83db42d2c2159ff0d1652919c78e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe\"" Jan 26 18:28:29.381084 containerd[1602]: time="2026-01-26T18:28:29.373212640Z" level=info msg="StartContainer for 
\"e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe\"" Jan 26 18:28:29.389110 containerd[1602]: time="2026-01-26T18:28:29.388616719Z" level=info msg="connecting to shim e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe" address="unix:///run/containerd/s/3129d5c473313680ad6776857cc27d4afedd7ed7155f73c9301e7e0b2b1a2091" protocol=ttrpc version=3 Jan 26 18:28:29.439731 kubelet[2838]: E0126 18:28:29.439088 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:29.471110 systemd[1]: Started cri-containerd-e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe.scope - libcontainer container e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe. Jan 26 18:28:29.525000 audit: BPF prog-id=161 op=LOAD Jan 26 18:28:29.541003 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 26 18:28:29.541085 kernel: audit: type=1334 audit(1769452109.525:546): prog-id=161 op=LOAD Jan 26 18:28:29.526000 audit: BPF prog-id=162 op=LOAD Jan 26 18:28:29.564349 kernel: audit: type=1334 audit(1769452109.526:547): prog-id=162 op=LOAD Jan 26 18:28:29.564408 kernel: audit: type=1300 audit(1769452109.526:547): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.526000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.668042 kernel: audit: type=1327 audit(1769452109.526:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.527000 audit: BPF prog-id=162 op=UNLOAD Jan 26 18:28:29.685031 kernel: audit: type=1334 audit(1769452109.527:548): prog-id=162 op=UNLOAD Jan 26 18:28:29.685125 kernel: audit: type=1300 audit(1769452109.527:548): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.784299 kernel: audit: type=1327 audit(1769452109.527:548): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.784434 kernel: audit: type=1334 audit(1769452109.527:549): prog-id=163 op=LOAD Jan 26 18:28:29.527000 audit: BPF prog-id=163 op=LOAD Jan 26 18:28:29.797722 kernel: audit: type=1300 audit(1769452109.527:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.841312 containerd[1602]: time="2026-01-26T18:28:29.840692244Z" level=info msg="StartContainer for \"e45053c4c95d9e08ffbf512dd196ef66644fdf1e6b685f73ddea9224d1cf2abe\" returns successfully" Jan 26 18:28:29.857072 kernel: audit: type=1327 audit(1769452109.527:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.527000 audit: BPF prog-id=164 
op=LOAD Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.527000 audit: BPF prog-id=164 op=UNLOAD Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:29.527000 audit: BPF prog-id=163 op=UNLOAD Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 
18:28:29.527000 audit: BPF prog-id=165 op=LOAD Jan 26 18:28:29.527000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3284 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:29.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353035336334633935643965303866666266353132646431393665 Jan 26 18:28:30.392557 containerd[1602]: time="2026-01-26T18:28:30.392256086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:30.399175 containerd[1602]: time="2026-01-26T18:28:30.398882059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:30.402045 containerd[1602]: time="2026-01-26T18:28:30.401671734Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:30.421120 containerd[1602]: time="2026-01-26T18:28:30.420419280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:30.422037 containerd[1602]: time="2026-01-26T18:28:30.421702628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.166482511s" Jan 26 18:28:30.422378 containerd[1602]: time="2026-01-26T18:28:30.421734708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 26 18:28:30.457658 containerd[1602]: time="2026-01-26T18:28:30.457083053Z" level=info msg="CreateContainer within sandbox \"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 26 18:28:30.507290 containerd[1602]: time="2026-01-26T18:28:30.506364113Z" level=info msg="Container 540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:30.553085 containerd[1602]: time="2026-01-26T18:28:30.552262054Z" level=info msg="CreateContainer within sandbox \"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075\"" Jan 26 18:28:30.558678 containerd[1602]: time="2026-01-26T18:28:30.558072175Z" level=info msg="StartContainer for \"540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075\"" Jan 26 18:28:30.569128 containerd[1602]: time="2026-01-26T18:28:30.568110350Z" level=info msg="connecting to shim 540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075" address="unix:///run/containerd/s/7c5b30107dc51001d8d8357078d7abdcdb23a0596faef0ecab61a89f94296d49" protocol=ttrpc version=3 Jan 26 18:28:30.690173 systemd[1]: Started cri-containerd-540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075.scope - libcontainer container 540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075. 
Jan 26 18:28:30.806000 audit: BPF prog-id=166 op=LOAD Jan 26 18:28:30.806000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3383 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:30.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306666666635326132313462643635306638396431393332306537 Jan 26 18:28:30.806000 audit: BPF prog-id=167 op=LOAD Jan 26 18:28:30.806000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3383 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:30.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306666666635326132313462643635306638396431393332306537 Jan 26 18:28:30.806000 audit: BPF prog-id=167 op=UNLOAD Jan 26 18:28:30.806000 audit[3505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:30.806000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306666666635326132313462643635306638396431393332306537 Jan 26 18:28:30.806000 audit: BPF prog-id=166 op=UNLOAD Jan 26 18:28:30.806000 audit[3505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:30.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306666666635326132313462643635306638396431393332306537 Jan 26 18:28:30.806000 audit: BPF prog-id=168 op=LOAD Jan 26 18:28:30.806000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3383 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:30.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306666666635326132313462643635306638396431393332306537 Jan 26 18:28:30.871236 kubelet[2838]: E0126 18:28:30.869661 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:30.950200 kubelet[2838]: E0126 18:28:30.949287 2838 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 26 18:28:30.950200 kubelet[2838]: W0126 18:28:30.949422 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.950200 kubelet[2838]: E0126 18:28:30.949567 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.952226 kubelet[2838]: E0126 18:28:30.951336 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.952226 kubelet[2838]: W0126 18:28:30.951562 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.952226 kubelet[2838]: E0126 18:28:30.951584 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.953570 kubelet[2838]: E0126 18:28:30.953176 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.953570 kubelet[2838]: W0126 18:28:30.953298 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.953570 kubelet[2838]: E0126 18:28:30.953315 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.957390 kubelet[2838]: E0126 18:28:30.956121 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.957390 kubelet[2838]: W0126 18:28:30.956138 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.957390 kubelet[2838]: E0126 18:28:30.956157 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.959060 kubelet[2838]: I0126 18:28:30.958212 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-575c6f599d-4tfcp" podStartSLOduration=3.92743192 podStartE2EDuration="8.958197788s" podCreationTimestamp="2026-01-26 18:28:22 +0000 UTC" firstStartedPulling="2026-01-26 18:28:24.219301523 +0000 UTC m=+31.287269854" lastFinishedPulling="2026-01-26 18:28:29.250067391 +0000 UTC m=+36.318035722" observedRunningTime="2026-01-26 18:28:30.907588895 +0000 UTC m=+37.975557276" watchObservedRunningTime="2026-01-26 18:28:30.958197788 +0000 UTC m=+38.026166119" Jan 26 18:28:30.963116 kubelet[2838]: E0126 18:28:30.962016 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.963116 kubelet[2838]: W0126 18:28:30.962145 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.963116 kubelet[2838]: E0126 18:28:30.962162 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.964658 kubelet[2838]: E0126 18:28:30.964172 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.964658 kubelet[2838]: W0126 18:28:30.964185 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.964658 kubelet[2838]: E0126 18:28:30.964199 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.966108 kubelet[2838]: E0126 18:28:30.965164 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.966108 kubelet[2838]: W0126 18:28:30.965283 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.966108 kubelet[2838]: E0126 18:28:30.965299 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.967302 kubelet[2838]: E0126 18:28:30.966659 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.967302 kubelet[2838]: W0126 18:28:30.967091 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.967302 kubelet[2838]: E0126 18:28:30.967105 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.968972 kubelet[2838]: E0126 18:28:30.968140 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.968972 kubelet[2838]: W0126 18:28:30.968283 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.968972 kubelet[2838]: E0126 18:28:30.968297 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.973155 kubelet[2838]: E0126 18:28:30.970604 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.973155 kubelet[2838]: W0126 18:28:30.970617 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.973155 kubelet[2838]: E0126 18:28:30.970629 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.973298 containerd[1602]: time="2026-01-26T18:28:30.971699947Z" level=info msg="StartContainer for \"540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075\" returns successfully" Jan 26 18:28:30.980737 kubelet[2838]: E0126 18:28:30.973579 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.980737 kubelet[2838]: W0126 18:28:30.973694 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.980737 kubelet[2838]: E0126 18:28:30.973708 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.980737 kubelet[2838]: E0126 18:28:30.980321 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.980737 kubelet[2838]: W0126 18:28:30.980340 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.980737 kubelet[2838]: E0126 18:28:30.980356 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.983957 kubelet[2838]: E0126 18:28:30.983587 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.983957 kubelet[2838]: W0126 18:28:30.983718 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.983957 kubelet[2838]: E0126 18:28:30.983733 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:30.988710 kubelet[2838]: E0126 18:28:30.986687 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.988710 kubelet[2838]: W0126 18:28:30.986701 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.988710 kubelet[2838]: E0126 18:28:30.986714 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:30.994391 kubelet[2838]: E0126 18:28:30.993212 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:30.994391 kubelet[2838]: W0126 18:28:30.993230 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:30.994391 kubelet[2838]: E0126 18:28:30.993246 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.029062 kubelet[2838]: E0126 18:28:31.026310 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.029062 kubelet[2838]: W0126 18:28:31.026339 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.029062 kubelet[2838]: E0126 18:28:31.027052 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.031668 kubelet[2838]: E0126 18:28:31.029350 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.031668 kubelet[2838]: W0126 18:28:31.029361 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.031668 kubelet[2838]: E0126 18:28:31.029379 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.034411 kubelet[2838]: E0126 18:28:31.033184 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.034411 kubelet[2838]: W0126 18:28:31.033322 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.034411 kubelet[2838]: E0126 18:28:31.033340 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.034411 kubelet[2838]: E0126 18:28:31.034405 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.036356 kubelet[2838]: W0126 18:28:31.034417 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.036356 kubelet[2838]: E0126 18:28:31.034430 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.036356 kubelet[2838]: E0126 18:28:31.035120 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.036356 kubelet[2838]: W0126 18:28:31.035136 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.036356 kubelet[2838]: E0126 18:28:31.035149 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.038118 kubelet[2838]: E0126 18:28:31.037419 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.038118 kubelet[2838]: W0126 18:28:31.037430 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.038118 kubelet[2838]: E0126 18:28:31.037572 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.040295 kubelet[2838]: E0126 18:28:31.040240 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.040295 kubelet[2838]: W0126 18:28:31.040256 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.040295 kubelet[2838]: E0126 18:28:31.040268 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.042068 kubelet[2838]: E0126 18:28:31.041239 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.042068 kubelet[2838]: W0126 18:28:31.041374 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.042068 kubelet[2838]: E0126 18:28:31.041387 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.044266 kubelet[2838]: E0126 18:28:31.042204 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.044266 kubelet[2838]: W0126 18:28:31.042217 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.044266 kubelet[2838]: E0126 18:28:31.042228 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.044393 kubelet[2838]: E0126 18:28:31.044348 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.044393 kubelet[2838]: W0126 18:28:31.044358 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.044393 kubelet[2838]: E0126 18:28:31.044371 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.050334 kubelet[2838]: E0126 18:28:31.050098 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.050334 kubelet[2838]: W0126 18:28:31.050226 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.050334 kubelet[2838]: E0126 18:28:31.050242 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.053183 kubelet[2838]: E0126 18:28:31.051696 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.053183 kubelet[2838]: W0126 18:28:31.053170 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.053183 kubelet[2838]: E0126 18:28:31.053186 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.054080 kubelet[2838]: E0126 18:28:31.053674 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.054133 kubelet[2838]: W0126 18:28:31.054118 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.054182 kubelet[2838]: E0126 18:28:31.054132 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.054593 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.056971 kubelet[2838]: W0126 18:28:31.054715 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.054728 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.055386 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.056971 kubelet[2838]: W0126 18:28:31.055398 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.055411 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.056614 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.056971 kubelet[2838]: W0126 18:28:31.056625 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.056971 kubelet[2838]: E0126 18:28:31.056637 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.058281 kubelet[2838]: E0126 18:28:31.057725 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.058281 kubelet[2838]: W0126 18:28:31.058106 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.058281 kubelet[2838]: E0126 18:28:31.058120 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:28:31.061901 kubelet[2838]: E0126 18:28:31.061128 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:28:31.061901 kubelet[2838]: W0126 18:28:31.061273 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:28:31.061901 kubelet[2838]: E0126 18:28:31.061287 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:28:31.076000 audit[3570]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:31.076000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd166f950 a2=0 a3=7ffcd166f93c items=0 ppid=3000 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:31.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:31.082245 systemd[1]: cri-containerd-540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075.scope: Deactivated successfully. Jan 26 18:28:31.085000 audit[3570]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:31.085000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcd166f950 a2=0 a3=7ffcd166f93c items=0 ppid=3000 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:31.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:31.089000 audit: BPF prog-id=168 op=UNLOAD Jan 26 18:28:31.129560 containerd[1602]: time="2026-01-26T18:28:31.129116765Z" level=info msg="received container exit event container_id:\"540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075\" id:\"540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075\" pid:3517 exited_at:{seconds:1769452111 nanos:126995387}" Jan 26 18:28:31.269196 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-540ffff52a214bd650f89d19320e70a6401214fdaa6b5f89a9a38c5edebd3075-rootfs.mount: Deactivated successfully. Jan 26 18:28:31.437408 kubelet[2838]: E0126 18:28:31.437267 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:31.884105 kubelet[2838]: E0126 18:28:31.882370 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:31.884105 kubelet[2838]: E0126 18:28:31.882403 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:31.886311 containerd[1602]: time="2026-01-26T18:28:31.886078528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 26 18:28:32.886360 kubelet[2838]: E0126 18:28:32.886130 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:33.437560 kubelet[2838]: E0126 18:28:33.437235 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:35.452325 kubelet[2838]: E0126 18:28:35.452186 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:36.414870 containerd[1602]: time="2026-01-26T18:28:36.414516055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:36.416878 containerd[1602]: time="2026-01-26T18:28:36.416633207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 26 18:28:36.418884 containerd[1602]: time="2026-01-26T18:28:36.418623696Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:36.422838 containerd[1602]: time="2026-01-26T18:28:36.422695938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:36.424180 containerd[1602]: time="2026-01-26T18:28:36.423687753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.537571645s" Jan 26 18:28:36.424180 containerd[1602]: time="2026-01-26T18:28:36.423993681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 26 18:28:36.434975 containerd[1602]: time="2026-01-26T18:28:36.434573151Z" level=info msg="CreateContainer within sandbox 
\"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 26 18:28:36.452611 containerd[1602]: time="2026-01-26T18:28:36.452383806Z" level=info msg="Container 4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:36.471483 containerd[1602]: time="2026-01-26T18:28:36.471291910Z" level=info msg="CreateContainer within sandbox \"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951\"" Jan 26 18:28:36.472527 containerd[1602]: time="2026-01-26T18:28:36.472498648Z" level=info msg="StartContainer for \"4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951\"" Jan 26 18:28:36.476632 containerd[1602]: time="2026-01-26T18:28:36.475868887Z" level=info msg="connecting to shim 4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951" address="unix:///run/containerd/s/7c5b30107dc51001d8d8357078d7abdcdb23a0596faef0ecab61a89f94296d49" protocol=ttrpc version=3 Jan 26 18:28:36.536159 systemd[1]: Started cri-containerd-4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951.scope - libcontainer container 4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951. 
Jan 26 18:28:36.629000 audit: BPF prog-id=169 op=LOAD Jan 26 18:28:36.637955 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 26 18:28:36.638058 kernel: audit: type=1334 audit(1769452116.629:562): prog-id=169 op=LOAD Jan 26 18:28:36.629000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.664109 kernel: audit: type=1300 audit(1769452116.629:562): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.664200 kernel: audit: type=1327 audit(1769452116.629:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.629000 audit: BPF prog-id=170 op=LOAD Jan 26 18:28:36.688962 kernel: audit: type=1334 audit(1769452116.629:563): prog-id=170 op=LOAD Jan 26 18:28:36.689074 kernel: audit: type=1300 audit(1769452116.629:563): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.629000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.734098 kernel: audit: type=1327 audit(1769452116.629:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.734200 kernel: audit: type=1334 audit(1769452116.629:564): prog-id=170 op=UNLOAD Jan 26 18:28:36.629000 audit: BPF prog-id=170 op=UNLOAD Jan 26 18:28:36.740209 kernel: audit: type=1300 audit(1769452116.629:564): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.629000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.751026 containerd[1602]: time="2026-01-26T18:28:36.750298059Z" level=info msg="StartContainer for 
\"4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951\" returns successfully" Jan 26 18:28:36.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.786401 kernel: audit: type=1327 audit(1769452116.629:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.786580 kernel: audit: type=1334 audit(1769452116.629:565): prog-id=169 op=UNLOAD Jan 26 18:28:36.629000 audit: BPF prog-id=169 op=UNLOAD Jan 26 18:28:36.629000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.629000 audit: BPF prog-id=171 op=LOAD Jan 26 18:28:36.629000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3383 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:36.629000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430383861643161653562353636343261623632313536653433646632 Jan 26 18:28:36.914566 kubelet[2838]: E0126 18:28:36.914243 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:37.438397 kubelet[2838]: E0126 18:28:37.437396 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:37.602918 systemd[1]: cri-containerd-4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951.scope: Deactivated successfully. Jan 26 18:28:37.604635 systemd[1]: cri-containerd-4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951.scope: Consumed 1.024s CPU time, 173.5M memory peak, 3.8M read from disk, 171.3M written to disk. Jan 26 18:28:37.608718 containerd[1602]: time="2026-01-26T18:28:37.608261941Z" level=info msg="received container exit event container_id:\"4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951\" id:\"4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951\" pid:3623 exited_at:{seconds:1769452117 nanos:604297916}" Jan 26 18:28:37.608000 audit: BPF prog-id=171 op=UNLOAD Jan 26 18:28:37.699903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4088ad1ae5b56642ab62156e43df2164c4a51954b279833af0c0704e64e4e951-rootfs.mount: Deactivated successfully. 
Jan 26 18:28:37.814628 kubelet[2838]: I0126 18:28:37.814533 2838 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 26 18:28:37.883490 systemd[1]: Created slice kubepods-burstable-podff4e0446_b364_4ad8_9ef7_bce278402973.slice - libcontainer container kubepods-burstable-podff4e0446_b364_4ad8_9ef7_bce278402973.slice. Jan 26 18:28:37.906139 systemd[1]: Created slice kubepods-burstable-pod2234becc_c720_462c_a720_e90ca2659bbd.slice - libcontainer container kubepods-burstable-pod2234becc_c720_462c_a720_e90ca2659bbd.slice. Jan 26 18:28:37.928005 kubelet[2838]: E0126 18:28:37.927678 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:37.929193 systemd[1]: Created slice kubepods-besteffort-podcceca80f_f824_4908_8b3f_ad347497224d.slice - libcontainer container kubepods-besteffort-podcceca80f_f824_4908_8b3f_ad347497224d.slice. Jan 26 18:28:37.931953 containerd[1602]: time="2026-01-26T18:28:37.931595408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 26 18:28:37.932029 kubelet[2838]: I0126 18:28:37.929891 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcn4\" (UniqueName: \"kubernetes.io/projected/fb70354b-2e8e-4b1e-823d-0f04eedecec2-kube-api-access-vqcn4\") pod \"goldmane-666569f655-8zw8p\" (UID: \"fb70354b-2e8e-4b1e-823d-0f04eedecec2\") " pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:37.932029 kubelet[2838]: I0126 18:28:37.929915 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cceca80f-f824-4908-8b3f-ad347497224d-whisker-backend-key-pair\") pod \"whisker-7b7fcc6c46-b7fpn\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " pod="calico-system/whisker-7b7fcc6c46-b7fpn" Jan 26 
18:28:37.932029 kubelet[2838]: I0126 18:28:37.929933 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q9c\" (UniqueName: \"kubernetes.io/projected/cceca80f-f824-4908-8b3f-ad347497224d-kube-api-access-b5q9c\") pod \"whisker-7b7fcc6c46-b7fpn\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " pod="calico-system/whisker-7b7fcc6c46-b7fpn" Jan 26 18:28:37.932029 kubelet[2838]: I0126 18:28:37.929950 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jf5\" (UniqueName: \"kubernetes.io/projected/ff4e0446-b364-4ad8-9ef7-bce278402973-kube-api-access-n4jf5\") pod \"coredns-674b8bbfcf-2x6sp\" (UID: \"ff4e0446-b364-4ad8-9ef7-bce278402973\") " pod="kube-system/coredns-674b8bbfcf-2x6sp" Jan 26 18:28:37.932029 kubelet[2838]: I0126 18:28:37.929964 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cceca80f-f824-4908-8b3f-ad347497224d-whisker-ca-bundle\") pod \"whisker-7b7fcc6c46-b7fpn\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " pod="calico-system/whisker-7b7fcc6c46-b7fpn" Jan 26 18:28:37.932184 kubelet[2838]: I0126 18:28:37.929978 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb70354b-2e8e-4b1e-823d-0f04eedecec2-config\") pod \"goldmane-666569f655-8zw8p\" (UID: \"fb70354b-2e8e-4b1e-823d-0f04eedecec2\") " pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:37.932184 kubelet[2838]: I0126 18:28:37.929993 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2234becc-c720-462c-a720-e90ca2659bbd-config-volume\") pod \"coredns-674b8bbfcf-8vs84\" (UID: \"2234becc-c720-462c-a720-e90ca2659bbd\") " 
pod="kube-system/coredns-674b8bbfcf-8vs84" Jan 26 18:28:37.932184 kubelet[2838]: I0126 18:28:37.930009 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb70354b-2e8e-4b1e-823d-0f04eedecec2-goldmane-ca-bundle\") pod \"goldmane-666569f655-8zw8p\" (UID: \"fb70354b-2e8e-4b1e-823d-0f04eedecec2\") " pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:37.932184 kubelet[2838]: I0126 18:28:37.930023 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fb70354b-2e8e-4b1e-823d-0f04eedecec2-goldmane-key-pair\") pod \"goldmane-666569f655-8zw8p\" (UID: \"fb70354b-2e8e-4b1e-823d-0f04eedecec2\") " pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:37.932184 kubelet[2838]: I0126 18:28:37.930038 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspnm\" (UniqueName: \"kubernetes.io/projected/2234becc-c720-462c-a720-e90ca2659bbd-kube-api-access-zspnm\") pod \"coredns-674b8bbfcf-8vs84\" (UID: \"2234becc-c720-462c-a720-e90ca2659bbd\") " pod="kube-system/coredns-674b8bbfcf-8vs84" Jan 26 18:28:37.932289 kubelet[2838]: I0126 18:28:37.930055 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4e0446-b364-4ad8-9ef7-bce278402973-config-volume\") pod \"coredns-674b8bbfcf-2x6sp\" (UID: \"ff4e0446-b364-4ad8-9ef7-bce278402973\") " pod="kube-system/coredns-674b8bbfcf-2x6sp" Jan 26 18:28:37.943148 systemd[1]: Created slice kubepods-besteffort-podfb70354b_2e8e_4b1e_823d_0f04eedecec2.slice - libcontainer container kubepods-besteffort-podfb70354b_2e8e_4b1e_823d_0f04eedecec2.slice. 
Jan 26 18:28:37.960978 systemd[1]: Created slice kubepods-besteffort-podbcefd4f3_4cd3_4d24_b71b_627a7a3ce855.slice - libcontainer container kubepods-besteffort-podbcefd4f3_4cd3_4d24_b71b_627a7a3ce855.slice. Jan 26 18:28:37.972876 systemd[1]: Created slice kubepods-besteffort-pod5f250e57_76e7_4282_9d3b_aa7149c84f3a.slice - libcontainer container kubepods-besteffort-pod5f250e57_76e7_4282_9d3b_aa7149c84f3a.slice. Jan 26 18:28:37.988334 systemd[1]: Created slice kubepods-besteffort-pod7718a3ef_224e_406c_b2ab_a63644f74c0b.slice - libcontainer container kubepods-besteffort-pod7718a3ef_224e_406c_b2ab_a63644f74c0b.slice. Jan 26 18:28:38.030544 kubelet[2838]: I0126 18:28:38.030390 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7718a3ef-224e-406c-b2ab-a63644f74c0b-calico-apiserver-certs\") pod \"calico-apiserver-6c8ccd88f4-6fjn2\" (UID: \"7718a3ef-224e-406c-b2ab-a63644f74c0b\") " pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" Jan 26 18:28:38.030656 kubelet[2838]: I0126 18:28:38.030586 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9gk\" (UniqueName: \"kubernetes.io/projected/5f250e57-76e7-4282-9d3b-aa7149c84f3a-kube-api-access-vm9gk\") pod \"calico-kube-controllers-56d588489d-lsq6l\" (UID: \"5f250e57-76e7-4282-9d3b-aa7149c84f3a\") " pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" Jan 26 18:28:38.030656 kubelet[2838]: I0126 18:28:38.030628 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pl2\" (UniqueName: \"kubernetes.io/projected/7718a3ef-224e-406c-b2ab-a63644f74c0b-kube-api-access-p6pl2\") pod \"calico-apiserver-6c8ccd88f4-6fjn2\" (UID: \"7718a3ef-224e-406c-b2ab-a63644f74c0b\") " pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" Jan 26 18:28:38.030656 kubelet[2838]: I0126 18:28:38.030642 
2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f250e57-76e7-4282-9d3b-aa7149c84f3a-tigera-ca-bundle\") pod \"calico-kube-controllers-56d588489d-lsq6l\" (UID: \"5f250e57-76e7-4282-9d3b-aa7149c84f3a\") " pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" Jan 26 18:28:38.030913 kubelet[2838]: I0126 18:28:38.030663 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bcefd4f3-4cd3-4d24-b71b-627a7a3ce855-calico-apiserver-certs\") pod \"calico-apiserver-6c8ccd88f4-dbcn4\" (UID: \"bcefd4f3-4cd3-4d24-b71b-627a7a3ce855\") " pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" Jan 26 18:28:38.030913 kubelet[2838]: I0126 18:28:38.030695 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdrn\" (UniqueName: \"kubernetes.io/projected/bcefd4f3-4cd3-4d24-b71b-627a7a3ce855-kube-api-access-lfdrn\") pod \"calico-apiserver-6c8ccd88f4-dbcn4\" (UID: \"bcefd4f3-4cd3-4d24-b71b-627a7a3ce855\") " pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" Jan 26 18:28:38.213264 kubelet[2838]: E0126 18:28:38.213077 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:38.215668 containerd[1602]: time="2026-01-26T18:28:38.215595376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x6sp,Uid:ff4e0446-b364-4ad8-9ef7-bce278402973,Namespace:kube-system,Attempt:0,}" Jan 26 18:28:38.219953 kubelet[2838]: E0126 18:28:38.219843 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:38.220351 
containerd[1602]: time="2026-01-26T18:28:38.220283352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vs84,Uid:2234becc-c720-462c-a720-e90ca2659bbd,Namespace:kube-system,Attempt:0,}" Jan 26 18:28:38.239017 containerd[1602]: time="2026-01-26T18:28:38.238498522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b7fcc6c46-b7fpn,Uid:cceca80f-f824-4908-8b3f-ad347497224d,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:38.253311 containerd[1602]: time="2026-01-26T18:28:38.253055032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zw8p,Uid:fb70354b-2e8e-4b1e-823d-0f04eedecec2,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:38.272573 containerd[1602]: time="2026-01-26T18:28:38.272182792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-dbcn4,Uid:bcefd4f3-4cd3-4d24-b71b-627a7a3ce855,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:28:38.292109 containerd[1602]: time="2026-01-26T18:28:38.292025476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d588489d-lsq6l,Uid:5f250e57-76e7-4282-9d3b-aa7149c84f3a,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:38.314328 containerd[1602]: time="2026-01-26T18:28:38.314211750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-6fjn2,Uid:7718a3ef-224e-406c-b2ab-a63644f74c0b,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:28:38.532267 containerd[1602]: time="2026-01-26T18:28:38.531514773Z" level=error msg="Failed to destroy network for sandbox \"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.534201 containerd[1602]: time="2026-01-26T18:28:38.533985548Z" level=error msg="Failed to destroy network for sandbox 
\"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.544913 containerd[1602]: time="2026-01-26T18:28:38.544258240Z" level=error msg="Failed to destroy network for sandbox \"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.545202 containerd[1602]: time="2026-01-26T18:28:38.544923271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vs84,Uid:2234becc-c720-462c-a720-e90ca2659bbd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.545378 kubelet[2838]: E0126 18:28:38.545339 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.545503 kubelet[2838]: E0126 18:28:38.545462 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8vs84" Jan 26 18:28:38.545503 kubelet[2838]: E0126 18:28:38.545490 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8vs84" Jan 26 18:28:38.545554 containerd[1602]: time="2026-01-26T18:28:38.545367497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b7fcc6c46-b7fpn,Uid:cceca80f-f824-4908-8b3f-ad347497224d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.545696 kubelet[2838]: E0126 18:28:38.545534 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8vs84_kube-system(2234becc-c720-462c-a720-e90ca2659bbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8vs84_kube-system(2234becc-c720-462c-a720-e90ca2659bbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aca87b292b0ee88f5cba3ae2204336c51d23a782ab8b808213c99ed91a665fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8vs84" 
podUID="2234becc-c720-462c-a720-e90ca2659bbd" Jan 26 18:28:38.546941 kubelet[2838]: E0126 18:28:38.546867 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.546941 kubelet[2838]: E0126 18:28:38.546903 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b7fcc6c46-b7fpn" Jan 26 18:28:38.546941 kubelet[2838]: E0126 18:28:38.546920 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b7fcc6c46-b7fpn" Jan 26 18:28:38.547035 kubelet[2838]: E0126 18:28:38.546953 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b7fcc6c46-b7fpn_calico-system(cceca80f-f824-4908-8b3f-ad347497224d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b7fcc6c46-b7fpn_calico-system(cceca80f-f824-4908-8b3f-ad347497224d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d3b5e09828dc0d740bce038ef54ab5157fae92f79c24c24e70b7ec8e2ee4e0d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b7fcc6c46-b7fpn" podUID="cceca80f-f824-4908-8b3f-ad347497224d" Jan 26 18:28:38.550707 containerd[1602]: time="2026-01-26T18:28:38.550337680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x6sp,Uid:ff4e0446-b364-4ad8-9ef7-bce278402973,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.551492 kubelet[2838]: E0126 18:28:38.550610 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.551492 kubelet[2838]: E0126 18:28:38.550639 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2x6sp" Jan 26 18:28:38.551492 kubelet[2838]: E0126 18:28:38.550654 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2x6sp" Jan 26 18:28:38.551673 kubelet[2838]: E0126 18:28:38.550858 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2x6sp_kube-system(ff4e0446-b364-4ad8-9ef7-bce278402973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2x6sp_kube-system(ff4e0446-b364-4ad8-9ef7-bce278402973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a6fc587991b210701c870eb3f231df83f577369a32257dcfe63642f5377a98e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2x6sp" podUID="ff4e0446-b364-4ad8-9ef7-bce278402973" Jan 26 18:28:38.556936 containerd[1602]: time="2026-01-26T18:28:38.556849461Z" level=error msg="Failed to destroy network for sandbox \"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.564184 containerd[1602]: time="2026-01-26T18:28:38.564062314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zw8p,Uid:fb70354b-2e8e-4b1e-823d-0f04eedecec2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.565114 kubelet[2838]: E0126 18:28:38.565017 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.565114 kubelet[2838]: E0126 18:28:38.565068 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:38.565114 kubelet[2838]: E0126 18:28:38.565096 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8zw8p" Jan 26 18:28:38.565231 kubelet[2838]: E0126 18:28:38.565154 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b676bf8090f4a5adbdc52187f5a469e61981d0505c0bc180d5b4706e5e219501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:28:38.574961 containerd[1602]: time="2026-01-26T18:28:38.574857704Z" level=error msg="Failed to destroy network for sandbox \"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.578617 containerd[1602]: time="2026-01-26T18:28:38.578516972Z" level=error msg="Failed to destroy network for sandbox \"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.582071 containerd[1602]: time="2026-01-26T18:28:38.581998484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-6fjn2,Uid:7718a3ef-224e-406c-b2ab-a63644f74c0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.583866 kubelet[2838]: E0126 18:28:38.582969 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.583866 kubelet[2838]: E0126 18:28:38.583010 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" Jan 26 18:28:38.583866 kubelet[2838]: E0126 18:28:38.583026 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" Jan 26 18:28:38.584159 kubelet[2838]: E0126 18:28:38.583066 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f15d931c3efb7221e54893a9e622f4627c3a67452d8714b599cef5c2a0d9ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:28:38.591154 containerd[1602]: time="2026-01-26T18:28:38.591020829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-dbcn4,Uid:bcefd4f3-4cd3-4d24-b71b-627a7a3ce855,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.591686 kubelet[2838]: E0126 18:28:38.591548 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.591686 kubelet[2838]: E0126 18:28:38.591585 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" Jan 26 18:28:38.591686 kubelet[2838]: E0126 18:28:38.591603 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" Jan 26 18:28:38.591913 kubelet[2838]: E0126 18:28:38.591636 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b52dd41cc0753fd6d6b816914c7f1a3d1efcbca0d8aa21cb122c904a3e7de58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:28:38.598942 containerd[1602]: time="2026-01-26T18:28:38.598698198Z" level=error msg="Failed to destroy network for sandbox \"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.602248 containerd[1602]: time="2026-01-26T18:28:38.602154609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d588489d-lsq6l,Uid:5f250e57-76e7-4282-9d3b-aa7149c84f3a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.602818 
kubelet[2838]: E0126 18:28:38.602581 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:38.602818 kubelet[2838]: E0126 18:28:38.602672 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" Jan 26 18:28:38.602818 kubelet[2838]: E0126 18:28:38.602689 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" Jan 26 18:28:38.602933 kubelet[2838]: E0126 18:28:38.602912 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fab863278689940d46e8864e666d9d5f18f01af3f8c3f9bf62b1fba6c183936\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:28:39.461369 systemd[1]: Created slice kubepods-besteffort-pode99188ce_3ac3_4524_8689_b68793ad3ef1.slice - libcontainer container kubepods-besteffort-pode99188ce_3ac3_4524_8689_b68793ad3ef1.slice. Jan 26 18:28:39.466251 containerd[1602]: time="2026-01-26T18:28:39.466176273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzr9m,Uid:e99188ce-3ac3-4524-8689-b68793ad3ef1,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:39.578310 containerd[1602]: time="2026-01-26T18:28:39.577997035Z" level=error msg="Failed to destroy network for sandbox \"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:39.581124 systemd[1]: run-netns-cni\x2d01f4acaf\x2d342d\x2d15e4\x2d3df6\x2d403dfa532cae.mount: Deactivated successfully. 
Jan 26 18:28:39.596681 containerd[1602]: time="2026-01-26T18:28:39.595352911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzr9m,Uid:e99188ce-3ac3-4524-8689-b68793ad3ef1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:39.597953 kubelet[2838]: E0126 18:28:39.597911 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:28:39.598614 kubelet[2838]: E0126 18:28:39.598484 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzr9m" Jan 26 18:28:39.598614 kubelet[2838]: E0126 18:28:39.598586 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzr9m" 
Jan 26 18:28:39.599027 kubelet[2838]: E0126 18:28:39.598651 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9925af32fd7a983cc23baf280ce1306f02ce40e1a10b1aacdd6824e3019997a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:45.950418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177440355.mount: Deactivated successfully. Jan 26 18:28:46.164524 containerd[1602]: time="2026-01-26T18:28:46.164304046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:46.195520 containerd[1602]: time="2026-01-26T18:28:46.195319557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 26 18:28:46.197904 containerd[1602]: time="2026-01-26T18:28:46.197612060Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:46.201939 containerd[1602]: time="2026-01-26T18:28:46.201650219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:28:46.202862 containerd[1602]: time="2026-01-26T18:28:46.202631992Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.270998283s" Jan 26 18:28:46.202862 containerd[1602]: time="2026-01-26T18:28:46.202660785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 26 18:28:46.240030 containerd[1602]: time="2026-01-26T18:28:46.239268392Z" level=info msg="CreateContainer within sandbox \"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 26 18:28:46.258270 containerd[1602]: time="2026-01-26T18:28:46.257952634Z" level=info msg="Container 88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:46.279310 containerd[1602]: time="2026-01-26T18:28:46.279124441Z" level=info msg="CreateContainer within sandbox \"8bc983d0040ab65e47143725d648d4b592ee5ca0288856625cd3438f15597be9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62\"" Jan 26 18:28:46.280102 containerd[1602]: time="2026-01-26T18:28:46.280066582Z" level=info msg="StartContainer for \"88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62\"" Jan 26 18:28:46.282200 containerd[1602]: time="2026-01-26T18:28:46.282046474Z" level=info msg="connecting to shim 88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62" address="unix:///run/containerd/s/7c5b30107dc51001d8d8357078d7abdcdb23a0596faef0ecab61a89f94296d49" protocol=ttrpc version=3 Jan 26 18:28:46.326300 systemd[1]: Started 
cri-containerd-88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62.scope - libcontainer container 88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62. Jan 26 18:28:46.438000 audit: BPF prog-id=172 op=LOAD Jan 26 18:28:46.444486 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 26 18:28:46.444557 kernel: audit: type=1334 audit(1769452126.438:568): prog-id=172 op=LOAD Jan 26 18:28:46.438000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.472672 kernel: audit: type=1300 audit(1769452126.438:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.472946 kernel: audit: type=1327 audit(1769452126.438:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.439000 audit: BPF prog-id=173 op=LOAD Jan 26 18:28:46.502153 kernel: audit: type=1334 audit(1769452126.439:569): prog-id=173 op=LOAD Jan 26 18:28:46.502232 kernel: audit: type=1300 audit(1769452126.439:569): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.439000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.551989 kernel: audit: type=1327 audit(1769452126.439:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.552187 kernel: audit: type=1334 audit(1769452126.439:570): prog-id=173 op=UNLOAD Jan 26 18:28:46.439000 audit: BPF prog-id=173 op=UNLOAD Jan 26 18:28:46.557991 kernel: audit: type=1300 audit(1769452126.439:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.439000 audit[3930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 26 18:28:46.568954 containerd[1602]: time="2026-01-26T18:28:46.568445282Z" level=info msg="StartContainer for \"88b0a2332bb7a617ffac8adf32a6a9a03bf41f2bada4d8a7a3db64f9240cbf62\" returns successfully" Jan 26 18:28:46.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.595991 kernel: audit: type=1327 audit(1769452126.439:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.601462 kernel: audit: type=1334 audit(1769452126.439:571): prog-id=172 op=UNLOAD Jan 26 18:28:46.439000 audit: BPF prog-id=172 op=UNLOAD Jan 26 18:28:46.439000 audit[3930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.439000 audit: BPF prog-id=174 op=LOAD Jan 26 18:28:46.439000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3383 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:46.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838623061323333326262376136313766666163386164663332613661 Jan 26 18:28:46.841639 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 26 18:28:46.841936 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 26 18:28:46.996687 kubelet[2838]: E0126 18:28:46.996447 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:47.056228 kubelet[2838]: I0126 18:28:47.056075 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-87m2g" podStartSLOduration=2.191552626 podStartE2EDuration="24.056054519s" podCreationTimestamp="2026-01-26 18:28:23 +0000 UTC" firstStartedPulling="2026-01-26 18:28:24.339683467 +0000 UTC m=+31.407651798" lastFinishedPulling="2026-01-26 18:28:46.20418536 +0000 UTC m=+53.272153691" observedRunningTime="2026-01-26 18:28:47.048737302 +0000 UTC m=+54.116705634" watchObservedRunningTime="2026-01-26 18:28:47.056054519 +0000 UTC m=+54.124022850" Jan 26 18:28:47.238492 kubelet[2838]: I0126 18:28:47.238278 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5q9c\" (UniqueName: \"kubernetes.io/projected/cceca80f-f824-4908-8b3f-ad347497224d-kube-api-access-b5q9c\") pod \"cceca80f-f824-4908-8b3f-ad347497224d\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " Jan 26 18:28:47.238492 kubelet[2838]: I0126 18:28:47.238461 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cceca80f-f824-4908-8b3f-ad347497224d-whisker-ca-bundle\") pod \"cceca80f-f824-4908-8b3f-ad347497224d\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " Jan 26 18:28:47.238650 kubelet[2838]: I0126 18:28:47.238487 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cceca80f-f824-4908-8b3f-ad347497224d-whisker-backend-key-pair\") pod \"cceca80f-f824-4908-8b3f-ad347497224d\" (UID: \"cceca80f-f824-4908-8b3f-ad347497224d\") " Jan 26 18:28:47.239905 kubelet[2838]: I0126 18:28:47.239514 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cceca80f-f824-4908-8b3f-ad347497224d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cceca80f-f824-4908-8b3f-ad347497224d" (UID: "cceca80f-f824-4908-8b3f-ad347497224d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 18:28:47.252273 kubelet[2838]: I0126 18:28:47.251932 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cceca80f-f824-4908-8b3f-ad347497224d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cceca80f-f824-4908-8b3f-ad347497224d" (UID: "cceca80f-f824-4908-8b3f-ad347497224d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 18:28:47.254925 systemd[1]: var-lib-kubelet-pods-cceca80f\x2df824\x2d4908\x2d8b3f\x2dad347497224d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 26 18:28:47.262861 kubelet[2838]: I0126 18:28:47.260557 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cceca80f-f824-4908-8b3f-ad347497224d-kube-api-access-b5q9c" (OuterVolumeSpecName: "kube-api-access-b5q9c") pod "cceca80f-f824-4908-8b3f-ad347497224d" (UID: "cceca80f-f824-4908-8b3f-ad347497224d"). InnerVolumeSpecName "kube-api-access-b5q9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 18:28:47.262236 systemd[1]: var-lib-kubelet-pods-cceca80f\x2df824\x2d4908\x2d8b3f\x2dad347497224d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5q9c.mount: Deactivated successfully. Jan 26 18:28:47.339292 kubelet[2838]: I0126 18:28:47.339167 2838 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5q9c\" (UniqueName: \"kubernetes.io/projected/cceca80f-f824-4908-8b3f-ad347497224d-kube-api-access-b5q9c\") on node \"localhost\" DevicePath \"\"" Jan 26 18:28:47.339292 kubelet[2838]: I0126 18:28:47.339267 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cceca80f-f824-4908-8b3f-ad347497224d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 26 18:28:47.339292 kubelet[2838]: I0126 18:28:47.339278 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cceca80f-f824-4908-8b3f-ad347497224d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 26 18:28:47.466854 systemd[1]: Removed slice kubepods-besteffort-podcceca80f_f824_4908_8b3f_ad347497224d.slice - libcontainer container kubepods-besteffort-podcceca80f_f824_4908_8b3f_ad347497224d.slice. 
Jan 26 18:28:48.005913 kubelet[2838]: E0126 18:28:48.004903 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:48.213303 systemd[1]: Created slice kubepods-besteffort-pod17a5ef19_3fa0_4382_add3_2ce06b88ea33.slice - libcontainer container kubepods-besteffort-pod17a5ef19_3fa0_4382_add3_2ce06b88ea33.slice. Jan 26 18:28:48.348998 kubelet[2838]: I0126 18:28:48.348052 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17a5ef19-3fa0-4382-add3-2ce06b88ea33-whisker-ca-bundle\") pod \"whisker-64656d67bd-dn9xb\" (UID: \"17a5ef19-3fa0-4382-add3-2ce06b88ea33\") " pod="calico-system/whisker-64656d67bd-dn9xb" Jan 26 18:28:48.348998 kubelet[2838]: I0126 18:28:48.348127 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmnl5\" (UniqueName: \"kubernetes.io/projected/17a5ef19-3fa0-4382-add3-2ce06b88ea33-kube-api-access-pmnl5\") pod \"whisker-64656d67bd-dn9xb\" (UID: \"17a5ef19-3fa0-4382-add3-2ce06b88ea33\") " pod="calico-system/whisker-64656d67bd-dn9xb" Jan 26 18:28:48.349170 kubelet[2838]: I0126 18:28:48.349146 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17a5ef19-3fa0-4382-add3-2ce06b88ea33-whisker-backend-key-pair\") pod \"whisker-64656d67bd-dn9xb\" (UID: \"17a5ef19-3fa0-4382-add3-2ce06b88ea33\") " pod="calico-system/whisker-64656d67bd-dn9xb" Jan 26 18:28:48.524447 containerd[1602]: time="2026-01-26T18:28:48.524290917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64656d67bd-dn9xb,Uid:17a5ef19-3fa0-4382-add3-2ce06b88ea33,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:48.966975 systemd-networkd[1514]: cali1bbb8f6d759: Link UP 
Jan 26 18:28:48.971705 systemd-networkd[1514]: cali1bbb8f6d759: Gained carrier Jan 26 18:28:49.008610 kubelet[2838]: E0126 18:28:49.008457 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:49.019990 containerd[1602]: 2026-01-26 18:28:48.597 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 26 18:28:49.019990 containerd[1602]: 2026-01-26 18:28:48.628 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64656d67bd--dn9xb-eth0 whisker-64656d67bd- calico-system 17a5ef19-3fa0-4382-add3-2ce06b88ea33 948 0 2026-01-26 18:28:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64656d67bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64656d67bd-dn9xb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1bbb8f6d759 [] [] }} ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-" Jan 26 18:28:49.019990 containerd[1602]: 2026-01-26 18:28:48.628 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.019990 containerd[1602]: 2026-01-26 18:28:48.800 [INFO][4062] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" HandleID="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" 
Workload="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.803 [INFO][4062] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" HandleID="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Workload="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000512d30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64656d67bd-dn9xb", "timestamp":"2026-01-26 18:28:48.800053502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.803 [INFO][4062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.803 [INFO][4062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.804 [INFO][4062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.825 [INFO][4062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" host="localhost" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.861 [INFO][4062] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.869 [INFO][4062] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.872 [INFO][4062] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.877 [INFO][4062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:49.020478 containerd[1602]: 2026-01-26 18:28:48.877 [INFO][4062] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" host="localhost" Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.881 [INFO][4062] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97 Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.891 [INFO][4062] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" host="localhost" Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.901 [INFO][4062] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" host="localhost" Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.904 [INFO][4062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" host="localhost" Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.905 [INFO][4062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:49.021117 containerd[1602]: 2026-01-26 18:28:48.905 [INFO][4062] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" HandleID="k8s-pod-network.30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Workload="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.021291 containerd[1602]: 2026-01-26 18:28:48.911 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64656d67bd--dn9xb-eth0", GenerateName:"whisker-64656d67bd-", Namespace:"calico-system", SelfLink:"", UID:"17a5ef19-3fa0-4382-add3-2ce06b88ea33", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64656d67bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64656d67bd-dn9xb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1bbb8f6d759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:49.021291 containerd[1602]: 2026-01-26 18:28:48.912 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.021570 containerd[1602]: 2026-01-26 18:28:48.912 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bbb8f6d759 ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.021570 containerd[1602]: 2026-01-26 18:28:48.976 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.021646 containerd[1602]: 2026-01-26 18:28:48.980 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" 
WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64656d67bd--dn9xb-eth0", GenerateName:"whisker-64656d67bd-", Namespace:"calico-system", SelfLink:"", UID:"17a5ef19-3fa0-4382-add3-2ce06b88ea33", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64656d67bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97", Pod:"whisker-64656d67bd-dn9xb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1bbb8f6d759", MAC:"a6:9e:1b:1e:5b:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:49.021935 containerd[1602]: 2026-01-26 18:28:49.008 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" Namespace="calico-system" Pod="whisker-64656d67bd-dn9xb" WorkloadEndpoint="localhost-k8s-whisker--64656d67bd--dn9xb-eth0" Jan 26 18:28:49.391290 containerd[1602]: time="2026-01-26T18:28:49.391151947Z" level=info msg="connecting to shim 
30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97" address="unix:///run/containerd/s/f8ed6ec966bff10b8beca5ba71b807b559f506ea9437ecdef6a741e23a7a8da5" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:49.442491 kubelet[2838]: I0126 18:28:49.442237 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cceca80f-f824-4908-8b3f-ad347497224d" path="/var/lib/kubelet/pods/cceca80f-f824-4908-8b3f-ad347497224d/volumes" Jan 26 18:28:49.445638 containerd[1602]: time="2026-01-26T18:28:49.445424246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-6fjn2,Uid:7718a3ef-224e-406c-b2ab-a63644f74c0b,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:28:49.571634 systemd[1]: Started cri-containerd-30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97.scope - libcontainer container 30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97. Jan 26 18:28:49.650000 audit: BPF prog-id=175 op=LOAD Jan 26 18:28:49.652000 audit: BPF prog-id=176 op=LOAD Jan 26 18:28:49.652000 audit[4223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.652000 audit: BPF prog-id=176 op=UNLOAD Jan 26 18:28:49.652000 audit[4223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.655000 audit: BPF prog-id=177 op=LOAD Jan 26 18:28:49.655000 audit[4223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.655000 audit: BPF prog-id=178 op=LOAD Jan 26 18:28:49.655000 audit[4223]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.655000 audit: BPF prog-id=178 op=UNLOAD Jan 26 18:28:49.655000 audit[4223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.655000 audit: BPF prog-id=177 op=UNLOAD Jan 26 18:28:49.655000 audit[4223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.655000 audit: BPF prog-id=179 op=LOAD Jan 26 18:28:49.655000 audit[4223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4205 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330616138663765346465306337666437653631313230633337653364 Jan 26 18:28:49.659131 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:49.769000 audit: BPF prog-id=180 op=LOAD Jan 26 18:28:49.769000 audit[4293]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7fec9ab0 a2=98 a3=1fffffffffffffff items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.769000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.770000 audit: BPF prog-id=180 op=UNLOAD Jan 26 18:28:49.770000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe7fec9a80 a3=0 items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.770000 audit: BPF prog-id=181 op=LOAD Jan 26 18:28:49.770000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7fec9990 a2=94 a3=3 items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.770000 
audit: BPF prog-id=181 op=UNLOAD Jan 26 18:28:49.770000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7fec9990 a2=94 a3=3 items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.770000 audit: BPF prog-id=182 op=LOAD Jan 26 18:28:49.770000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7fec99d0 a2=94 a3=7ffe7fec9bb0 items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.770000 audit: BPF prog-id=182 op=UNLOAD Jan 26 18:28:49.770000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7fec99d0 a2=94 a3=7ffe7fec9bb0 items=0 ppid=4085 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.770000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:28:49.784000 audit: BPF prog-id=183 op=LOAD Jan 26 18:28:49.784000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe49cb6050 a2=98 a3=3 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.784000 audit: BPF prog-id=183 op=UNLOAD Jan 26 18:28:49.784000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe49cb6020 a3=0 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.784000 audit: BPF prog-id=184 op=LOAD Jan 26 18:28:49.784000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49cb5e40 a2=94 a3=54428f items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.785000 audit: BPF prog-id=184 op=UNLOAD Jan 26 18:28:49.785000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe49cb5e40 a2=94 a3=54428f items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.785000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.785000 audit: BPF prog-id=185 op=LOAD Jan 26 18:28:49.785000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49cb5e70 a2=94 a3=2 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.785000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.785000 audit: BPF prog-id=185 op=UNLOAD Jan 26 18:28:49.785000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe49cb5e70 a2=0 a3=2 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:49.785000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:49.823885 containerd[1602]: time="2026-01-26T18:28:49.823514711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64656d67bd-dn9xb,Uid:17a5ef19-3fa0-4382-add3-2ce06b88ea33,Namespace:calico-system,Attempt:0,} returns sandbox id \"30aa8f7e4de0c7fd7e61120c37e3dab380017e09c061dd243b57fd525274be97\"" Jan 26 18:28:49.836561 containerd[1602]: time="2026-01-26T18:28:49.836509177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:28:49.879542 systemd-networkd[1514]: calica57cf4f295: Link UP Jan 26 18:28:49.884509 systemd-networkd[1514]: calica57cf4f295: Gained carrier Jan 26 18:28:49.916446 containerd[1602]: 2026-01-26 18:28:49.561 [INFO][4230] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 26 18:28:49.916446 containerd[1602]: 2026-01-26 18:28:49.601 
[INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0 calico-apiserver-6c8ccd88f4- calico-apiserver 7718a3ef-224e-406c-b2ab-a63644f74c0b 876 0 2026-01-26 18:28:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8ccd88f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c8ccd88f4-6fjn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica57cf4f295 [] [] }} ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-" Jan 26 18:28:49.916446 containerd[1602]: 2026-01-26 18:28:49.604 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.916446 containerd[1602]: 2026-01-26 18:28:49.737 [INFO][4273] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" HandleID="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.737 [INFO][4273] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" HandleID="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" 
Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000436660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c8ccd88f4-6fjn2", "timestamp":"2026-01-26 18:28:49.737453839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.737 [INFO][4273] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.737 [INFO][4273] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.737 [INFO][4273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.760 [INFO][4273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" host="localhost" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.782 [INFO][4273] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.802 [INFO][4273] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.807 [INFO][4273] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.818 [INFO][4273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:49.916965 containerd[1602]: 2026-01-26 18:28:49.819 [INFO][4273] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" host="localhost" Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.827 [INFO][4273] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.842 [INFO][4273] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" host="localhost" Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.855 [INFO][4273] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" host="localhost" Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.855 [INFO][4273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" host="localhost" Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.855 [INFO][4273] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:28:49.917482 containerd[1602]: 2026-01-26 18:28:49.855 [INFO][4273] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" HandleID="k8s-pod-network.c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.917718 containerd[1602]: 2026-01-26 18:28:49.864 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0", GenerateName:"calico-apiserver-6c8ccd88f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7718a3ef-224e-406c-b2ab-a63644f74c0b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8ccd88f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c8ccd88f4-6fjn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica57cf4f295", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:49.920002 containerd[1602]: 2026-01-26 18:28:49.864 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.920002 containerd[1602]: 2026-01-26 18:28:49.864 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica57cf4f295 ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.920002 containerd[1602]: 2026-01-26 18:28:49.885 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.920122 containerd[1602]: 2026-01-26 18:28:49.886 [INFO][4230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0", 
GenerateName:"calico-apiserver-6c8ccd88f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7718a3ef-224e-406c-b2ab-a63644f74c0b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8ccd88f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f", Pod:"calico-apiserver-6c8ccd88f4-6fjn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica57cf4f295", MAC:"6e:6b:86:46:45:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:49.920290 containerd[1602]: 2026-01-26 18:28:49.905 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-6fjn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--6fjn2-eth0" Jan 26 18:28:49.968706 containerd[1602]: time="2026-01-26T18:28:49.968601365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:49.970891 containerd[1602]: time="2026-01-26T18:28:49.970682757Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:28:49.971934 containerd[1602]: time="2026-01-26T18:28:49.970861179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:49.972001 kubelet[2838]: E0126 18:28:49.971638 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:28:49.972884 kubelet[2838]: E0126 18:28:49.972072 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:28:49.981172 kubelet[2838]: E0126 18:28:49.981016 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc422a360d1a4f58bab37439bea74799,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:49.989631 containerd[1602]: time="2026-01-26T18:28:49.989532729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:28:49.996842 containerd[1602]: 
time="2026-01-26T18:28:49.993318992Z" level=info msg="connecting to shim c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f" address="unix:///run/containerd/s/0d8348498e75f63ffa684ad050baf96e5afaa6ba41f624141706c294c7eb615d" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:50.078273 systemd[1]: Started cri-containerd-c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f.scope - libcontainer container c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f. Jan 26 18:28:50.095000 containerd[1602]: time="2026-01-26T18:28:50.094535459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:50.097143 containerd[1602]: time="2026-01-26T18:28:50.096899574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:28:50.097143 containerd[1602]: time="2026-01-26T18:28:50.097046738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:50.097288 kubelet[2838]: E0126 18:28:50.097189 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:28:50.097288 kubelet[2838]: E0126 18:28:50.097238 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:28:50.097951 
kubelet[2838]: E0126 18:28:50.097466 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:50.099176 kubelet[2838]: E0126 18:28:50.098648 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:28:50.110000 audit: BPF prog-id=186 op=LOAD Jan 26 18:28:50.111000 audit: BPF prog-id=187 op=LOAD Jan 26 18:28:50.111000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.111000 audit: BPF prog-id=187 op=UNLOAD Jan 26 18:28:50.111000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 
pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.112000 audit: BPF prog-id=188 op=LOAD Jan 26 18:28:50.112000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.112000 audit: BPF prog-id=189 op=LOAD Jan 26 18:28:50.112000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.112000 audit: BPF prog-id=189 op=UNLOAD Jan 26 18:28:50.112000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.112000 audit: BPF prog-id=188 op=UNLOAD Jan 26 18:28:50.112000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.112000 audit: BPF prog-id=190 op=LOAD Jan 26 18:28:50.112000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4321 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626235383137653864623139356437353933313764346463666230 Jan 26 18:28:50.115640 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS 
names, ignoring: No such device or address Jan 26 18:28:50.142000 audit: BPF prog-id=191 op=LOAD Jan 26 18:28:50.142000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49cb5d30 a2=94 a3=1 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.142000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.143000 audit: BPF prog-id=191 op=UNLOAD Jan 26 18:28:50.143000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe49cb5d30 a2=94 a3=1 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.143000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF prog-id=192 op=LOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe49cb5d20 a2=94 a3=4 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF prog-id=192 op=UNLOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe49cb5d20 a2=0 a3=4 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF 
prog-id=193 op=LOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe49cb5b80 a2=94 a3=5 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF prog-id=193 op=UNLOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe49cb5b80 a2=0 a3=5 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF prog-id=194 op=LOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe49cb5da0 a2=94 a3=6 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.158000 audit: BPF prog-id=194 op=UNLOAD Jan 26 18:28:50.158000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe49cb5da0 a2=0 a3=6 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.158000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.159000 audit: BPF prog-id=195 op=LOAD Jan 26 18:28:50.159000 audit[4294]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7ffe49cb5550 a2=94 a3=88 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.159000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.159000 audit: BPF prog-id=196 op=LOAD Jan 26 18:28:50.159000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe49cb53d0 a2=94 a3=2 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.159000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.159000 audit: BPF prog-id=196 op=UNLOAD Jan 26 18:28:50.159000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe49cb5400 a2=0 a3=7ffe49cb5500 items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.159000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.160000 audit: BPF prog-id=195 op=UNLOAD Jan 26 18:28:50.160000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=a836d10 a2=0 a3=2c17c676f539a68e items=0 ppid=4085 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.160000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:28:50.192000 audit: BPF prog-id=197 op=LOAD Jan 26 18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed7deeac0 
a2=98 a3=1999999999999999 items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.192000 audit: BPF prog-id=197 op=UNLOAD Jan 26 18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffed7deea90 a3=0 items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.192000 audit: BPF prog-id=198 op=LOAD Jan 26 18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed7dee9a0 a2=94 a3=ffff items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.192000 audit: BPF prog-id=198 op=UNLOAD Jan 26 
18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed7dee9a0 a2=94 a3=ffff items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.192000 audit: BPF prog-id=199 op=LOAD Jan 26 18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed7dee9e0 a2=94 a3=7ffed7deebc0 items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.192000 audit: BPF prog-id=199 op=UNLOAD Jan 26 18:28:50.192000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed7dee9e0 a2=94 a3=7ffed7deebc0 items=0 ppid=4085 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.192000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:28:50.200622 containerd[1602]: time="2026-01-26T18:28:50.200540380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-6fjn2,Uid:7718a3ef-224e-406c-b2ab-a63644f74c0b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c9bb5817e8db195d759317d4dcfb0db3c726941cadf05cecc93778959a60af5f\"" Jan 26 18:28:50.204578 containerd[1602]: time="2026-01-26T18:28:50.204247359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:28:50.277706 containerd[1602]: time="2026-01-26T18:28:50.277031041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:50.281277 containerd[1602]: time="2026-01-26T18:28:50.281217221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:28:50.282230 containerd[1602]: time="2026-01-26T18:28:50.281310193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:50.282306 kubelet[2838]: E0126 18:28:50.281917 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:28:50.282306 kubelet[2838]: E0126 18:28:50.281970 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:28:50.282306 kubelet[2838]: E0126 18:28:50.282120 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6pl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:50.284017 kubelet[2838]: E0126 18:28:50.283960 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:28:50.345553 systemd-networkd[1514]: cali1bbb8f6d759: Gained IPv6LL Jan 26 18:28:50.379009 systemd-networkd[1514]: vxlan.calico: Link UP Jan 26 18:28:50.379017 systemd-networkd[1514]: vxlan.calico: Gained carrier Jan 26 18:28:50.452000 audit: BPF prog-id=200 op=LOAD Jan 26 18:28:50.452000 audit[4382]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc377de0a0 a2=98 a3=0 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.455000 audit: BPF prog-id=200 op=UNLOAD Jan 26 18:28:50.455000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc377de070 a3=0 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.455000 audit: BPF prog-id=201 op=LOAD Jan 26 18:28:50.455000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc377ddeb0 a2=94 a3=54428f items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.456000 audit: BPF prog-id=201 op=UNLOAD Jan 26 18:28:50.456000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 
a1=7ffc377ddeb0 a2=94 a3=54428f items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.456000 audit: BPF prog-id=202 op=LOAD Jan 26 18:28:50.456000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc377ddee0 a2=94 a3=2 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.456000 audit: BPF prog-id=202 op=UNLOAD Jan 26 18:28:50.456000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc377ddee0 a2=0 a3=2 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.456000 audit: BPF prog-id=203 op=LOAD Jan 26 18:28:50.456000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc377ddc90 a2=94 a3=4 items=0 ppid=4085 pid=4382 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.456000 audit: BPF prog-id=203 op=UNLOAD Jan 26 18:28:50.456000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc377ddc90 a2=94 a3=4 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.457000 audit: BPF prog-id=204 op=LOAD Jan 26 18:28:50.457000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc377ddd90 a2=94 a3=7ffc377ddf10 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.457000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.457000 audit: BPF prog-id=204 op=UNLOAD Jan 26 18:28:50.457000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc377ddd90 a2=0 a3=7ffc377ddf10 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.457000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.462000 audit: BPF prog-id=205 op=LOAD Jan 26 18:28:50.462000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc377dd4c0 a2=94 a3=2 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.467000 audit: BPF prog-id=205 op=UNLOAD Jan 26 18:28:50.467000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc377dd4c0 a2=0 a3=2 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.467000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.467000 audit: BPF prog-id=206 op=LOAD Jan 26 18:28:50.467000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc377dd5c0 a2=94 a3=30 items=0 ppid=4085 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.467000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:28:50.560000 audit: BPF prog-id=207 op=LOAD Jan 26 18:28:50.560000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3e4bd80 a2=98 a3=0 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.560000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.561000 audit: BPF prog-id=207 op=UNLOAD Jan 26 18:28:50.561000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec3e4bd50 a3=0 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.561000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.561000 audit: BPF prog-id=208 op=LOAD Jan 26 18:28:50.561000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec3e4bb70 a2=94 a3=54428f items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.561000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.561000 audit: BPF prog-id=208 op=UNLOAD Jan 26 18:28:50.561000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec3e4bb70 a2=94 a3=54428f items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.561000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.561000 audit: BPF prog-id=209 op=LOAD Jan 26 18:28:50.561000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec3e4bba0 a2=94 a3=2 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.561000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.561000 audit: BPF prog-id=209 op=UNLOAD Jan 26 18:28:50.561000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec3e4bba0 a2=0 a3=2 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.561000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.873000 audit: BPF prog-id=210 op=LOAD Jan 26 18:28:50.873000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec3e4ba60 a2=94 a3=1 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.873000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.874000 audit: BPF prog-id=210 op=UNLOAD Jan 26 18:28:50.874000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec3e4ba60 a2=94 a3=1 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.887000 audit: BPF prog-id=211 op=LOAD Jan 26 18:28:50.887000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec3e4ba50 a2=94 a3=4 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.887000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.888000 audit: BPF prog-id=211 op=UNLOAD Jan 26 18:28:50.888000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffec3e4ba50 a2=0 a3=4 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.888000 audit: BPF prog-id=212 op=LOAD Jan 26 18:28:50.888000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec3e4b8b0 a2=94 a3=5 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.888000 audit: BPF prog-id=212 op=UNLOAD Jan 26 18:28:50.888000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffec3e4b8b0 a2=0 a3=5 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.888000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.888000 audit: BPF prog-id=213 op=LOAD Jan 26 18:28:50.888000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec3e4bad0 a2=94 a3=6 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.889000 audit: BPF prog-id=213 op=UNLOAD Jan 26 18:28:50.889000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffec3e4bad0 a2=0 a3=6 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.889000 audit: BPF prog-id=214 op=LOAD Jan 26 18:28:50.889000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec3e4b280 a2=94 a3=88 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.889000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.889000 audit: BPF prog-id=215 op=LOAD Jan 26 18:28:50.889000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffec3e4b100 a2=94 a3=2 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.889000 audit: BPF prog-id=215 op=UNLOAD Jan 26 18:28:50.889000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffec3e4b130 a2=0 a3=7ffec3e4b230 items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.890000 audit: BPF prog-id=214 op=UNLOAD Jan 26 18:28:50.890000 audit[4391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=238a1d10 a2=0 a3=a9ce17ddb22feced items=0 ppid=4085 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.890000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:28:50.908000 audit: BPF prog-id=206 op=UNLOAD Jan 26 18:28:50.908000 audit[4085]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00117b8c0 a2=0 a3=0 items=0 ppid=4072 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:50.908000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 26 18:28:51.035000 audit[4418]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4418 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.035000 audit[4418]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdffe45cb0 a2=0 a3=7ffdffe45c9c items=0 ppid=4085 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.035000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.038000 audit[4417]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4417 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.038000 audit[4417]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc5dbede80 a2=0 a3=7ffc5dbede6c items=0 ppid=4085 pid=4417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.038000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.051982 kubelet[2838]: E0126 18:28:51.051251 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:28:51.056608 kubelet[2838]: E0126 18:28:51.056452 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:28:51.081000 audit[4416]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.081000 audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc94065e20 a2=0 a3=7ffc94065e0c items=0 
ppid=4085 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.081000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.095000 audit[4419]: NETFILTER_CFG table=filter:124 family=2 entries=136 op=nft_register_chain pid=4419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.095000 audit[4419]: SYSCALL arch=c000003e syscall=46 success=yes exit=78424 a0=3 a1=7ffcb905f2c0 a2=0 a3=7ffcb905f2ac items=0 ppid=4085 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.146000 audit[4428]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:51.146000 audit[4428]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff43b8b410 a2=0 a3=7fff43b8b3fc items=0 ppid=3000 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:51.153000 audit[4428]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4428 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 26 18:28:51.153000 audit[4428]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff43b8b410 a2=0 a3=0 items=0 ppid=3000 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:51.180000 audit[4431]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:51.180000 audit[4431]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf20afc00 a2=0 a3=7ffdf20afbec items=0 ppid=3000 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:51.190000 audit[4431]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:51.190000 audit[4431]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf20afc00 a2=0 a3=0 items=0 ppid=3000 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:51.241305 systemd-networkd[1514]: calica57cf4f295: Gained IPv6LL Jan 26 18:28:51.437989 
kubelet[2838]: E0126 18:28:51.437693 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:51.439563 containerd[1602]: time="2026-01-26T18:28:51.439269710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vs84,Uid:2234becc-c720-462c-a720-e90ca2659bbd,Namespace:kube-system,Attempt:0,}" Jan 26 18:28:51.441107 containerd[1602]: time="2026-01-26T18:28:51.440731149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d588489d-lsq6l,Uid:5f250e57-76e7-4282-9d3b-aa7149c84f3a,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:51.720408 systemd-networkd[1514]: cali7855bf40a8c: Link UP Jan 26 18:28:51.721886 systemd-networkd[1514]: cali7855bf40a8c: Gained carrier Jan 26 18:28:51.743673 containerd[1602]: 2026-01-26 18:28:51.542 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8vs84-eth0 coredns-674b8bbfcf- kube-system 2234becc-c720-462c-a720-e90ca2659bbd 870 0 2026-01-26 18:27:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8vs84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7855bf40a8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-" Jan 26 18:28:51.743673 containerd[1602]: 2026-01-26 18:28:51.543 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.743673 containerd[1602]: 2026-01-26 18:28:51.634 [INFO][4462] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" HandleID="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Workload="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.634 [INFO][4462] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" HandleID="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Workload="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8vs84", "timestamp":"2026-01-26 18:28:51.634429049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.635 [INFO][4462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.635 [INFO][4462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.635 [INFO][4462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.654 [INFO][4462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" host="localhost" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.667 [INFO][4462] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.680 [INFO][4462] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.682 [INFO][4462] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.687 [INFO][4462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:51.744202 containerd[1602]: 2026-01-26 18:28:51.687 [INFO][4462] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" host="localhost" Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.691 [INFO][4462] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.697 [INFO][4462] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" host="localhost" Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4462] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" host="localhost" Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" host="localhost" Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:51.745006 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4462] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" HandleID="k8s-pod-network.6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Workload="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.745204 containerd[1602]: 2026-01-26 18:28:51.713 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8vs84-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2234becc-c720-462c-a720-e90ca2659bbd", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8vs84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7855bf40a8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:51.745460 containerd[1602]: 2026-01-26 18:28:51.713 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.745460 containerd[1602]: 2026-01-26 18:28:51.713 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7855bf40a8c ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.745460 containerd[1602]: 2026-01-26 18:28:51.721 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.745575 containerd[1602]: 2026-01-26 18:28:51.722 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8vs84-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2234becc-c720-462c-a720-e90ca2659bbd", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d", Pod:"coredns-674b8bbfcf-8vs84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7855bf40a8c", MAC:"b6:c8:3a:b1:f6:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:51.745575 containerd[1602]: 2026-01-26 18:28:51.739 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vs84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8vs84-eth0" Jan 26 18:28:51.792000 audit[4487]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=4487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.803254 kernel: kauditd_printk_skb: 259 callbacks suppressed Jan 26 18:28:51.803398 kernel: audit: type=1325 audit(1769452131.792:659): table=filter:129 family=2 entries=46 op=nft_register_chain pid=4487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.792000 audit[4487]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffc5f584770 a2=0 a3=7ffc5f58475c items=0 ppid=4085 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.840576 kernel: audit: type=1300 audit(1769452131.792:659): arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffc5f584770 a2=0 a3=7ffc5f58475c items=0 ppid=4085 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.840697 kernel: audit: type=1327 audit(1769452131.792:659): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:51.842863 containerd[1602]: time="2026-01-26T18:28:51.842198927Z" level=info msg="connecting to shim 6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d" address="unix:///run/containerd/s/96669645ef150a0e6fe9436b0f3766081e06402fdd9bcf98fb916985fed03d18" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:51.861591 systemd-networkd[1514]: califc6161df110: Link UP Jan 26 18:28:51.864128 systemd-networkd[1514]: califc6161df110: Gained carrier Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.572 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0 calico-kube-controllers-56d588489d- calico-system 5f250e57-76e7-4282-9d3b-aa7149c84f3a 878 0 2026-01-26 18:28:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56d588489d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56d588489d-lsq6l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califc6161df110 [] [] }} ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.572 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.670 [INFO][4469] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" HandleID="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Workload="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.671 [INFO][4469] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" HandleID="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Workload="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56d588489d-lsq6l", "timestamp":"2026-01-26 18:28:51.670736562 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.671 [INFO][4469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.706 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.756 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.767 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.778 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.783 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.793 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.793 [INFO][4469] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.798 [INFO][4469] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234 Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.812 [INFO][4469] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.834 [INFO][4469] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.835 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" host="localhost" Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.835 [INFO][4469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:51.914671 containerd[1602]: 2026-01-26 18:28:51.835 [INFO][4469] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" HandleID="k8s-pod-network.615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Workload="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.916976 containerd[1602]: 2026-01-26 18:28:51.849 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0", GenerateName:"calico-kube-controllers-56d588489d-", Namespace:"calico-system", SelfLink:"", UID:"5f250e57-76e7-4282-9d3b-aa7149c84f3a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d588489d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56d588489d-lsq6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc6161df110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:51.916976 containerd[1602]: 2026-01-26 18:28:51.849 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.916976 containerd[1602]: 2026-01-26 18:28:51.849 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6161df110 ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.916976 containerd[1602]: 2026-01-26 18:28:51.871 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.916976 containerd[1602]: 
2026-01-26 18:28:51.875 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0", GenerateName:"calico-kube-controllers-56d588489d-", Namespace:"calico-system", SelfLink:"", UID:"5f250e57-76e7-4282-9d3b-aa7149c84f3a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d588489d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234", Pod:"calico-kube-controllers-56d588489d-lsq6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc6161df110", MAC:"96:fc:9b:74:44:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:51.916976 containerd[1602]: 
2026-01-26 18:28:51.905 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" Namespace="calico-system" Pod="calico-kube-controllers-56d588489d-lsq6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d588489d--lsq6l-eth0" Jan 26 18:28:51.935218 systemd[1]: Started cri-containerd-6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d.scope - libcontainer container 6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d. Jan 26 18:28:51.936000 audit[4527]: NETFILTER_CFG table=filter:130 family=2 entries=44 op=nft_register_chain pid=4527 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.958177 kernel: audit: type=1325 audit(1769452131.936:660): table=filter:130 family=2 entries=44 op=nft_register_chain pid=4527 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:51.958278 kernel: audit: type=1300 audit(1769452131.936:660): arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff316bfd40 a2=0 a3=7fff316bfd2c items=0 ppid=4085 pid=4527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.936000 audit[4527]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff316bfd40 a2=0 a3=7fff316bfd2c items=0 ppid=4085 pid=4527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:51.936000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:52.031945 kernel: audit: type=1327 audit(1769452131.936:660): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:52.038000 audit: BPF prog-id=216 op=LOAD Jan 26 18:28:52.053964 kernel: audit: type=1334 audit(1769452132.038:661): prog-id=216 op=LOAD Jan 26 18:28:52.054067 kernel: audit: type=1334 audit(1769452132.040:662): prog-id=217 op=LOAD Jan 26 18:28:52.040000 audit: BPF prog-id=217 op=LOAD Jan 26 18:28:52.040000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.073142 kubelet[2838]: E0126 18:28:52.057428 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:28:52.054719 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:52.073660 containerd[1602]: time="2026-01-26T18:28:52.070279055Z" level=info msg="connecting to shim 615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234" address="unix:///run/containerd/s/a8650cb6b7e9604b04c747a5b15aec72f6ce009441c2ddf0b9ba97248febba79" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:52.081147 kernel: audit: type=1300 audit(1769452132.040:662): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4496 
pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.081198 kernel: audit: type=1327 audit(1769452132.040:662): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.040000 audit: BPF prog-id=217 op=UNLOAD Jan 26 18:28:52.040000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.042000 audit: BPF prog-id=218 op=LOAD Jan 26 18:28:52.042000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.042000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.043000 audit: BPF prog-id=219 op=LOAD Jan 26 18:28:52.043000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.043000 audit: BPF prog-id=219 op=UNLOAD Jan 26 18:28:52.043000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.043000 audit: BPF prog-id=218 op=UNLOAD Jan 26 18:28:52.043000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.043000 audit: BPF prog-id=220 op=LOAD Jan 26 18:28:52.043000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4496 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663343935383038343237376331313566613530346165346638653533 Jan 26 18:28:52.181439 systemd[1]: Started cri-containerd-615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234.scope - libcontainer container 615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234. 
Jan 26 18:28:52.259000 audit: BPF prog-id=221 op=LOAD Jan 26 18:28:52.260000 audit: BPF prog-id=222 op=LOAD Jan 26 18:28:52.260000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.260000 audit: BPF prog-id=222 op=UNLOAD Jan 26 18:28:52.260000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.261000 audit: BPF prog-id=223 op=LOAD Jan 26 18:28:52.261000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.261000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.261000 audit: BPF prog-id=224 op=LOAD Jan 26 18:28:52.261000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.261000 audit: BPF prog-id=224 op=UNLOAD Jan 26 18:28:52.261000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.262000 audit: BPF prog-id=223 op=UNLOAD Jan 26 18:28:52.262000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:52.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.262000 audit: BPF prog-id=225 op=LOAD Jan 26 18:28:52.262000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4546 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356261623663613164326539376137613638353136393939626362 Jan 26 18:28:52.265240 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:52.308205 containerd[1602]: time="2026-01-26T18:28:52.307306461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vs84,Uid:2234becc-c720-462c-a720-e90ca2659bbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d\"" Jan 26 18:28:52.316892 kubelet[2838]: E0126 18:28:52.316647 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:52.330904 containerd[1602]: time="2026-01-26T18:28:52.330513579Z" level=info msg="CreateContainer within sandbox \"6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 26 18:28:52.402186 systemd-networkd[1514]: 
vxlan.calico: Gained IPv6LL Jan 26 18:28:52.429516 containerd[1602]: time="2026-01-26T18:28:52.429322030Z" level=info msg="Container 04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:52.441520 containerd[1602]: time="2026-01-26T18:28:52.437316378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d588489d-lsq6l,Uid:5f250e57-76e7-4282-9d3b-aa7149c84f3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"615bab6ca1d2e97a7a68516999bcb1ed62acfaf532db6ef89549ff26a6676234\"" Jan 26 18:28:52.442842 containerd[1602]: time="2026-01-26T18:28:52.439187767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzr9m,Uid:e99188ce-3ac3-4524-8689-b68793ad3ef1,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:52.455934 containerd[1602]: time="2026-01-26T18:28:52.454644023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:28:52.468648 containerd[1602]: time="2026-01-26T18:28:52.468296654Z" level=info msg="CreateContainer within sandbox \"6c4958084277c115fa504ae4f8e53c2e3b4b2998c28abe2424e52c8958ed7d4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff\"" Jan 26 18:28:52.482621 containerd[1602]: time="2026-01-26T18:28:52.482507214Z" level=info msg="StartContainer for \"04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff\"" Jan 26 18:28:52.492700 containerd[1602]: time="2026-01-26T18:28:52.492315062Z" level=info msg="connecting to shim 04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff" address="unix:///run/containerd/s/96669645ef150a0e6fe9436b0f3766081e06402fdd9bcf98fb916985fed03d18" protocol=ttrpc version=3 Jan 26 18:28:52.577307 systemd[1]: Started cri-containerd-04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff.scope - libcontainer container 
04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff. Jan 26 18:28:52.623408 containerd[1602]: time="2026-01-26T18:28:52.623283589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:52.633652 containerd[1602]: time="2026-01-26T18:28:52.633299053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:28:52.633652 containerd[1602]: time="2026-01-26T18:28:52.633522580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:52.635556 kubelet[2838]: E0126 18:28:52.635143 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:28:52.635556 kubelet[2838]: E0126 18:28:52.635253 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:28:52.635556 kubelet[2838]: E0126 18:28:52.635476 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:52.637017 kubelet[2838]: E0126 18:28:52.636947 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:28:52.644000 audit: BPF prog-id=226 op=LOAD Jan 26 18:28:52.646000 audit: BPF prog-id=227 op=LOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=227 op=UNLOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=228 op=LOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=229 op=LOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4496 pid=4604 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=229 op=UNLOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=228 op=UNLOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.646000 audit: BPF prog-id=230 op=LOAD Jan 26 18:28:52.646000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 
a3=0 items=0 ppid=4496 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:52.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653231656166626165333662386134626265363130353533356561 Jan 26 18:28:52.745100 containerd[1602]: time="2026-01-26T18:28:52.744198059Z" level=info msg="StartContainer for \"04e21eafbae36b8a4bbe6105535ea1ff2a8da451fa74be999529f5aaea39aeff\" returns successfully" Jan 26 18:28:52.879891 systemd-networkd[1514]: cali5a112754501: Link UP Jan 26 18:28:52.884281 systemd-networkd[1514]: cali5a112754501: Gained carrier Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.688 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gzr9m-eth0 csi-node-driver- calico-system e99188ce-3ac3-4524-8689-b68793ad3ef1 756 0 2026-01-26 18:28:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gzr9m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5a112754501 [] [] }} ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.689 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.780 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" HandleID="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Workload="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.781 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" HandleID="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Workload="localhost-k8s-csi--node--driver--gzr9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003438f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gzr9m", "timestamp":"2026-01-26 18:28:52.780923029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.782 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.782 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.782 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.799 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.812 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.828 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.833 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.837 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.837 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.840 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503 Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.849 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.858 [INFO][4632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.859 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" host="localhost" Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.859 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:52.917886 containerd[1602]: 2026-01-26 18:28:52.859 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" HandleID="k8s-pod-network.8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Workload="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.866 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzr9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e99188ce-3ac3-4524-8689-b68793ad3ef1", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gzr9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a112754501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.866 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.866 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a112754501 ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.886 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.887 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" 
Namespace="calico-system" Pod="csi-node-driver-gzr9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzr9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e99188ce-3ac3-4524-8689-b68793ad3ef1", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503", Pod:"csi-node-driver-gzr9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a112754501", MAC:"4a:05:6e:69:7b:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:52.918686 containerd[1602]: 2026-01-26 18:28:52.908 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" Namespace="calico-system" Pod="csi-node-driver-gzr9m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gzr9m-eth0" Jan 26 18:28:52.977101 containerd[1602]: time="2026-01-26T18:28:52.976451819Z" level=info msg="connecting to shim 8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503" address="unix:///run/containerd/s/a3b6810d9beaa00116043f4d86b4ce0fadc654b0ffd8cc4635cc188c915446b4" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:53.019000 audit[4669]: NETFILTER_CFG table=filter:131 family=2 entries=48 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:53.019000 audit[4669]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffe12fb17d0 a2=0 a3=7ffe12fb17bc items=0 ppid=4085 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.019000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:53.069991 kubelet[2838]: E0126 18:28:53.069887 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:53.088648 systemd[1]: Started cri-containerd-8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503.scope - libcontainer container 8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503. 
Jan 26 18:28:53.093874 kubelet[2838]: E0126 18:28:53.093147 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:28:53.166000 audit: BPF prog-id=231 op=LOAD Jan 26 18:28:53.167000 audit: BPF prog-id=232 op=LOAD Jan 26 18:28:53.167000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.168000 audit: BPF prog-id=232 op=UNLOAD Jan 26 18:28:53.168000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.168000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.168000 audit: BPF prog-id=233 op=LOAD Jan 26 18:28:53.171448 kubelet[2838]: I0126 18:28:53.170036 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8vs84" podStartSLOduration=56.170016223 podStartE2EDuration="56.170016223s" podCreationTimestamp="2026-01-26 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:28:53.119978511 +0000 UTC m=+60.187946842" watchObservedRunningTime="2026-01-26 18:28:53.170016223 +0000 UTC m=+60.237984554" Jan 26 18:28:53.168000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.172000 audit: BPF prog-id=234 op=LOAD Jan 26 18:28:53.172000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.172000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.172000 audit: BPF prog-id=234 op=UNLOAD Jan 26 18:28:53.172000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.172000 audit: BPF prog-id=233 op=UNLOAD Jan 26 18:28:53.172000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.172000 audit: BPF prog-id=235 op=LOAD Jan 26 18:28:53.172000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:28:53.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313037643866386561306365356536653761333531653132616639 Jan 26 18:28:53.184851 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:53.207000 audit[4699]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:53.207000 audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe21e26d00 a2=0 a3=7ffe21e26cec items=0 ppid=3000 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:53.214000 audit[4699]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:53.214000 audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe21e26d00 a2=0 a3=0 items=0 ppid=3000 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:53.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:53.225149 systemd-networkd[1514]: califc6161df110: Gained IPv6LL Jan 26 18:28:53.271906 containerd[1602]: time="2026-01-26T18:28:53.271217853Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-gzr9m,Uid:e99188ce-3ac3-4524-8689-b68793ad3ef1,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e107d8f8ea0ce5e6e7a351e12af964261a4644699c5898e932c8130377cd503\"" Jan 26 18:28:53.281974 containerd[1602]: time="2026-01-26T18:28:53.281651281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:28:53.365023 containerd[1602]: time="2026-01-26T18:28:53.364672230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:53.369546 containerd[1602]: time="2026-01-26T18:28:53.369171763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:28:53.369546 containerd[1602]: time="2026-01-26T18:28:53.369271268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:53.369908 kubelet[2838]: E0126 18:28:53.369593 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:28:53.369908 kubelet[2838]: E0126 18:28:53.369645 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:28:53.370290 kubelet[2838]: E0126 18:28:53.370120 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 26 18:28:53.382702 containerd[1602]: time="2026-01-26T18:28:53.382506976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:28:53.450184 containerd[1602]: time="2026-01-26T18:28:53.447304786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zw8p,Uid:fb70354b-2e8e-4b1e-823d-0f04eedecec2,Namespace:calico-system,Attempt:0,}" Jan 26 18:28:53.450691 kubelet[2838]: E0126 18:28:53.448554 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:53.451878 containerd[1602]: time="2026-01-26T18:28:53.451000695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x6sp,Uid:ff4e0446-b364-4ad8-9ef7-bce278402973,Namespace:kube-system,Attempt:0,}" Jan 26 18:28:53.453044 containerd[1602]: time="2026-01-26T18:28:53.451997779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-dbcn4,Uid:bcefd4f3-4cd3-4d24-b71b-627a7a3ce855,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:28:53.529512 containerd[1602]: time="2026-01-26T18:28:53.524934794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:53.529512 containerd[1602]: time="2026-01-26T18:28:53.527106966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:28:53.529512 containerd[1602]: time="2026-01-26T18:28:53.527193136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:53.530565 kubelet[2838]: E0126 18:28:53.530522 2838 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:28:53.541026 kubelet[2838]: E0126 18:28:53.539552 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:28:53.541026 kubelet[2838]: E0126 18:28:53.540929 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liveness
Probe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:53.542287 kubelet[2838]: E0126 18:28:53.542245 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:53.547974 systemd-networkd[1514]: cali7855bf40a8c: Gained IPv6LL Jan 26 18:28:53.848617 systemd-networkd[1514]: cali35596b9ef00: Link UP Jan 26 18:28:53.852285 systemd-networkd[1514]: cali35596b9ef00: 
Gained carrier Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.648 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0 coredns-674b8bbfcf- kube-system ff4e0446-b364-4ad8-9ef7-bce278402973 867 0 2026-01-26 18:27:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2x6sp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35596b9ef00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.648 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.732 [INFO][4753] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" HandleID="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Workload="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.735 [INFO][4753] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" HandleID="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Workload="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ce5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2x6sp", "timestamp":"2026-01-26 18:28:53.732048086 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.737 [INFO][4753] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.737 [INFO][4753] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.737 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.754 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.769 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.780 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.788 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.794 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.796 [INFO][4753] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.799 [INFO][4753] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87 Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.814 [INFO][4753] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.832 [INFO][4753] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.832 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" host="localhost" Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.832 [INFO][4753] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:28:53.887180 containerd[1602]: 2026-01-26 18:28:53.832 [INFO][4753] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" HandleID="k8s-pod-network.000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Workload="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.842 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff4e0446-b364-4ad8-9ef7-bce278402973", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2x6sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35596b9ef00", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.842 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.842 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35596b9ef00 ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.851 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.854 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff4e0446-b364-4ad8-9ef7-bce278402973", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87", Pod:"coredns-674b8bbfcf-2x6sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35596b9ef00", MAC:"3e:3c:9a:5d:fb:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:53.888910 containerd[1602]: 2026-01-26 18:28:53.878 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" Namespace="kube-system" Pod="coredns-674b8bbfcf-2x6sp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2x6sp-eth0" Jan 26 18:28:53.979049 containerd[1602]: time="2026-01-26T18:28:53.978496153Z" level=info msg="connecting to shim 000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87" address="unix:///run/containerd/s/ed069a43f9e1fcc2182dbc843d3c7143ec4a6bebc3ba663896ae6564db08e1e3" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:54.008000 audit[4795]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:54.008000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7fff885a48b0 a2=0 a3=7fff885a489c items=0 ppid=4085 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:54.017238 systemd-networkd[1514]: calie9b9dc9c509: Link UP Jan 26 18:28:54.020489 systemd-networkd[1514]: calie9b9dc9c509: Gained carrier Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.669 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--8zw8p-eth0 goldmane-666569f655- calico-system fb70354b-2e8e-4b1e-823d-0f04eedecec2 874 0 2026-01-26 18:28:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost 
goldmane-666569f655-8zw8p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie9b9dc9c509 [] [] }} ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.671 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.780 [INFO][4761] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" HandleID="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Workload="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.780 [INFO][4761] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" HandleID="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Workload="localhost-k8s-goldmane--666569f655--8zw8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-8zw8p", "timestamp":"2026-01-26 18:28:53.780437571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.780 [INFO][4761] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.833 [INFO][4761] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.833 [INFO][4761] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.856 [INFO][4761] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.877 [INFO][4761] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.893 [INFO][4761] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.897 [INFO][4761] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.903 [INFO][4761] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.903 [INFO][4761] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.910 [INFO][4761] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5 Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.924 [INFO][4761] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.957 [INFO][4761] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.958 [INFO][4761] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" host="localhost" Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.961 [INFO][4761] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:54.079499 containerd[1602]: 2026-01-26 18:28:53.969 [INFO][4761] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" HandleID="k8s-pod-network.a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Workload="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:53.989 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8zw8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fb70354b-2e8e-4b1e-823d-0f04eedecec2", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-8zw8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9b9dc9c509", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:53.989 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:53.989 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9b9dc9c509 ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:54.020 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:54.035 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8zw8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fb70354b-2e8e-4b1e-823d-0f04eedecec2", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5", Pod:"goldmane-666569f655-8zw8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9b9dc9c509", MAC:"e2:2e:fe:84:62:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:54.081025 containerd[1602]: 2026-01-26 18:28:54.073 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" Namespace="calico-system" Pod="goldmane-666569f655-8zw8p" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zw8p-eth0" Jan 26 18:28:54.148000 audit[4828]: NETFILTER_CFG table=filter:135 family=2 entries=64 op=nft_register_chain pid=4828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:54.148000 audit[4828]: SYSCALL arch=c000003e syscall=46 success=yes exit=31120 a0=3 a1=7ffd86f7b020 a2=0 a3=7ffd86f7b00c items=0 ppid=4085 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.148000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:54.188679 systemd[1]: Started cri-containerd-000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87.scope - libcontainer container 000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87. 
Jan 26 18:28:54.199505 kubelet[2838]: E0126 18:28:54.198120 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:54.221637 kubelet[2838]: E0126 18:28:54.221431 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:28:54.233689 kubelet[2838]: E0126 18:28:54.233221 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:54.292917 systemd-networkd[1514]: caliceb46e8d16c: Link UP Jan 26 18:28:54.296000 audit: BPF prog-id=236 op=LOAD Jan 26 18:28:54.303878 systemd-networkd[1514]: 
caliceb46e8d16c: Gained carrier Jan 26 18:28:54.306000 audit: BPF prog-id=237 op=LOAD Jan 26 18:28:54.306000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.306000 audit: BPF prog-id=237 op=UNLOAD Jan 26 18:28:54.306000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.310942 containerd[1602]: time="2026-01-26T18:28:54.309229915Z" level=info msg="connecting to shim a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5" address="unix:///run/containerd/s/4aebab1c51eae2b2489b466deeed5b0682f118fd796bf2dc3bbe9a226fdd2603" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:54.311000 audit: BPF prog-id=238 op=LOAD Jan 26 18:28:54.311000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.312000 audit: BPF prog-id=239 op=LOAD Jan 26 18:28:54.312000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.314000 audit: BPF prog-id=239 op=UNLOAD Jan 26 18:28:54.314000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.315000 audit: BPF prog-id=238 op=UNLOAD Jan 26 18:28:54.315000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.316000 audit: BPF prog-id=240 op=LOAD Jan 26 18:28:54.316000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030303338386366663939386336613339393132353736323037346432 Jan 26 18:28:54.327523 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.681 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0 calico-apiserver-6c8ccd88f4- calico-apiserver bcefd4f3-4cd3-4d24-b71b-627a7a3ce855 875 0 2026-01-26 18:28:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8ccd88f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c8ccd88f4-dbcn4 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] caliceb46e8d16c [] [] }} ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.681 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.817 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" HandleID="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.817 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" HandleID="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019b780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c8ccd88f4-dbcn4", "timestamp":"2026-01-26 18:28:53.817228489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.818 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.962 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.969 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:53.991 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.002 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.025 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.087 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.128 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.130 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.139 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154 Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.156 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.202 [INFO][4768] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.210 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" host="localhost" Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.210 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:28:54.361481 containerd[1602]: 2026-01-26 18:28:54.210 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" HandleID="k8s-pod-network.5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Workload="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.362621 containerd[1602]: 2026-01-26 18:28:54.271 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0", GenerateName:"calico-apiserver-6c8ccd88f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcefd4f3-4cd3-4d24-b71b-627a7a3ce855", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6c8ccd88f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c8ccd88f4-dbcn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceb46e8d16c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:54.362621 containerd[1602]: 2026-01-26 18:28:54.271 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.362621 containerd[1602]: 2026-01-26 18:28:54.271 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb46e8d16c ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.362621 containerd[1602]: 2026-01-26 18:28:54.310 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.362621 
containerd[1602]: 2026-01-26 18:28:54.324 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0", GenerateName:"calico-apiserver-6c8ccd88f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcefd4f3-4cd3-4d24-b71b-627a7a3ce855", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8ccd88f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154", Pod:"calico-apiserver-6c8ccd88f4-dbcn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceb46e8d16c", MAC:"f2:33:5e:d3:70:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:28:54.362621 containerd[1602]: 2026-01-26 
18:28:54.354 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" Namespace="calico-apiserver" Pod="calico-apiserver-6c8ccd88f4-dbcn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8ccd88f4--dbcn4-eth0" Jan 26 18:28:54.426000 audit[4886]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:54.426000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd6d55470 a2=0 a3=7ffcd6d5545c items=0 ppid=3000 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:54.463046 containerd[1602]: time="2026-01-26T18:28:54.462957661Z" level=info msg="connecting to shim 5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154" address="unix:///run/containerd/s/0fc2fb0b4468ff8c39871b226c0992cce8528579db73719fab158a7cb970712d" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:28:54.491000 audit[4886]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:54.491000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcd6d55470 a2=0 a3=7ffcd6d5545c items=0 ppid=3000 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.491000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:54.492000 audit[4910]: NETFILTER_CFG table=filter:138 family=2 entries=67 op=nft_register_chain pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:28:54.492000 audit[4910]: SYSCALL arch=c000003e syscall=46 success=yes exit=31868 a0=3 a1=7fff688b9d30 a2=0 a3=5622766bc000 items=0 ppid=4085 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.492000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:28:54.503006 systemd[1]: Started cri-containerd-a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5.scope - libcontainer container a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5. 
Jan 26 18:28:54.569648 containerd[1602]: time="2026-01-26T18:28:54.569513079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x6sp,Uid:ff4e0446-b364-4ad8-9ef7-bce278402973,Namespace:kube-system,Attempt:0,} returns sandbox id \"000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87\"" Jan 26 18:28:54.577452 kubelet[2838]: E0126 18:28:54.576452 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:54.590002 containerd[1602]: time="2026-01-26T18:28:54.589957588Z" level=info msg="CreateContainer within sandbox \"000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 26 18:28:54.616000 audit: BPF prog-id=241 op=LOAD Jan 26 18:28:54.618000 audit: BPF prog-id=242 op=LOAD Jan 26 18:28:54.618000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.618000 audit: BPF prog-id=242 op=UNLOAD Jan 26 18:28:54.618000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.618000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.628000 audit: BPF prog-id=243 op=LOAD Jan 26 18:28:54.628000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.631000 audit: BPF prog-id=244 op=LOAD Jan 26 18:28:54.631000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.631000 audit: BPF prog-id=244 op=UNLOAD Jan 26 18:28:54.631000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 26 18:28:54.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.631000 audit: BPF prog-id=243 op=UNLOAD Jan 26 18:28:54.631000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.632000 audit: BPF prog-id=245 op=LOAD Jan 26 18:28:54.632000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4853 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137343763646536363737633264613530313962633336303565373861 Jan 26 18:28:54.640546 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:54.656537 systemd[1]: Started cri-containerd-5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154.scope - libcontainer container 
5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154. Jan 26 18:28:54.659938 containerd[1602]: time="2026-01-26T18:28:54.659711899Z" level=info msg="Container 006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:28:54.695136 containerd[1602]: time="2026-01-26T18:28:54.695094011Z" level=info msg="CreateContainer within sandbox \"000388cff998c6a399125762074d27954f185ad38590d1aec4c263302ae6cc87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41\"" Jan 26 18:28:54.698667 containerd[1602]: time="2026-01-26T18:28:54.698317480Z" level=info msg="StartContainer for \"006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41\"" Jan 26 18:28:54.709179 containerd[1602]: time="2026-01-26T18:28:54.708094534Z" level=info msg="connecting to shim 006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41" address="unix:///run/containerd/s/ed069a43f9e1fcc2182dbc843d3c7143ec4a6bebc3ba663896ae6564db08e1e3" protocol=ttrpc version=3 Jan 26 18:28:54.752000 audit: BPF prog-id=246 op=LOAD Jan 26 18:28:54.754000 audit: BPF prog-id=247 op=LOAD Jan 26 18:28:54.754000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.755000 audit: BPF prog-id=247 op=UNLOAD Jan 26 18:28:54.755000 audit[4915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.756000 audit: BPF prog-id=248 op=LOAD Jan 26 18:28:54.756000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.757000 audit: BPF prog-id=249 op=LOAD Jan 26 18:28:54.757000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.758000 audit: BPF prog-id=249 op=UNLOAD Jan 26 18:28:54.758000 audit[4915]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.758000 audit: BPF prog-id=248 op=UNLOAD Jan 26 18:28:54.758000 audit[4915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.758000 audit: BPF prog-id=250 op=LOAD Jan 26 18:28:54.758000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4895 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393861346366303962313536616338376639343838333233376431 Jan 26 18:28:54.772951 systemd-resolved[1289]: Failed to determine 
the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:28:54.796630 containerd[1602]: time="2026-01-26T18:28:54.796507261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zw8p,Uid:fb70354b-2e8e-4b1e-823d-0f04eedecec2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a747cde6677c2da5019bc3605e78a72b92df552cb30928d717fb17cdc780dec5\"" Jan 26 18:28:54.803516 systemd[1]: Started cri-containerd-006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41.scope - libcontainer container 006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41. Jan 26 18:28:54.804155 containerd[1602]: time="2026-01-26T18:28:54.803995154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:28:54.852000 audit: BPF prog-id=251 op=LOAD Jan 26 18:28:54.855000 audit: BPF prog-id=252 op=LOAD Jan 26 18:28:54.855000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.855000 audit: BPF prog-id=252 op=UNLOAD Jan 26 18:28:54.855000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.855000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.856000 audit: BPF prog-id=253 op=LOAD Jan 26 18:28:54.856000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.858000 audit: BPF prog-id=254 op=LOAD Jan 26 18:28:54.858000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.862000 audit: BPF prog-id=254 op=UNLOAD Jan 26 18:28:54.862000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 26 18:28:54.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.862000 audit: BPF prog-id=253 op=UNLOAD Jan 26 18:28:54.862000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.863000 audit: BPF prog-id=255 op=LOAD Jan 26 18:28:54.863000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4797 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:54.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030366639336630393636376566343231613534326536396661616162 Jan 26 18:28:54.890161 systemd-networkd[1514]: cali35596b9ef00: Gained IPv6LL Jan 26 18:28:54.907450 containerd[1602]: time="2026-01-26T18:28:54.907404482Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c8ccd88f4-dbcn4,Uid:bcefd4f3-4cd3-4d24-b71b-627a7a3ce855,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5398a4cf09b156ac87f94883237d1daa8238af1aec28c062174f21df1a33c154\"" Jan 26 18:28:54.923503 containerd[1602]: time="2026-01-26T18:28:54.923219873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:54.929854 containerd[1602]: time="2026-01-26T18:28:54.928477447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:28:54.929854 containerd[1602]: time="2026-01-26T18:28:54.928720489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:54.931056 kubelet[2838]: E0126 18:28:54.931024 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:28:54.931657 kubelet[2838]: E0126 18:28:54.931173 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:28:54.931926 kubelet[2838]: E0126 18:28:54.931641 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:54.932160 containerd[1602]: time="2026-01-26T18:28:54.931646876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:28:54.933877 kubelet[2838]: E0126 18:28:54.932986 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:28:54.951577 containerd[1602]: time="2026-01-26T18:28:54.951237541Z" level=info msg="StartContainer for \"006f93f09667ef421a542e69faaab9dc264e8fd78eef506b23c39b10e9943b41\" returns successfully" Jan 26 
18:28:54.953191 systemd-networkd[1514]: cali5a112754501: Gained IPv6LL Jan 26 18:28:55.024582 containerd[1602]: time="2026-01-26T18:28:55.023981540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:28:55.027457 containerd[1602]: time="2026-01-26T18:28:55.026575898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:28:55.027576 containerd[1602]: time="2026-01-26T18:28:55.027503164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:28:55.030641 kubelet[2838]: E0126 18:28:55.030525 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:28:55.030641 kubelet[2838]: E0126 18:28:55.030578 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:28:55.031281 kubelet[2838]: E0126 18:28:55.031063 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfdrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:28:55.033132 kubelet[2838]: E0126 18:28:55.032553 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:28:55.210432 kubelet[2838]: E0126 18:28:55.209916 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:28:55.223482 kubelet[2838]: E0126 18:28:55.223401 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:55.233468 kubelet[2838]: E0126 18:28:55.233285 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:55.242457 kubelet[2838]: E0126 18:28:55.241980 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:28:55.242457 kubelet[2838]: E0126 18:28:55.242189 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:28:55.297000 audit[4992]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:55.297000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffe215240 a2=0 a3=7ffffe21522c items=0 ppid=3000 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:55.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:55.310000 audit[4992]: NETFILTER_CFG 
table=nat:140 family=2 entries=20 op=nft_register_rule pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:55.310000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffe215240 a2=0 a3=7ffffe21522c items=0 ppid=3000 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:55.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:55.333542 kubelet[2838]: I0126 18:28:55.333225 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2x6sp" podStartSLOduration=58.333203186 podStartE2EDuration="58.333203186s" podCreationTimestamp="2026-01-26 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:28:55.307088598 +0000 UTC m=+62.375056929" watchObservedRunningTime="2026-01-26 18:28:55.333203186 +0000 UTC m=+62.401171517" Jan 26 18:28:55.350000 audit[4994]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:55.350000 audit[4994]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe0926cdd0 a2=0 a3=7ffe0926cdbc items=0 ppid=3000 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:55.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:55.375000 audit[4994]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain 
pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:55.375000 audit[4994]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe0926cdd0 a2=0 a3=7ffe0926cdbc items=0 ppid=3000 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:55.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:55.785621 systemd-networkd[1514]: calie9b9dc9c509: Gained IPv6LL Jan 26 18:28:56.041701 systemd-networkd[1514]: caliceb46e8d16c: Gained IPv6LL Jan 26 18:28:56.231234 kubelet[2838]: E0126 18:28:56.231136 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:56.231671 kubelet[2838]: E0126 18:28:56.231136 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:28:56.232704 kubelet[2838]: E0126 18:28:56.232436 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:28:56.233529 kubelet[2838]: E0126 18:28:56.233074 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:28:56.403000 audit[5003]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:56.403000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff00f649e0 a2=0 a3=7fff00f649cc items=0 ppid=3000 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:56.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:56.460000 audit[5003]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:28:56.460000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff00f649e0 a2=0 a3=7fff00f649cc items=0 ppid=3000 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:28:56.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:28:57.237308 kubelet[2838]: E0126 18:28:57.236700 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 
18:29:03.437488 kubelet[2838]: E0126 18:29:03.437445 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:03.439504 containerd[1602]: time="2026-01-26T18:29:03.438959022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:29:03.525474 containerd[1602]: time="2026-01-26T18:29:03.525199570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:03.527998 containerd[1602]: time="2026-01-26T18:29:03.527681529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:29:03.527998 containerd[1602]: time="2026-01-26T18:29:03.527861356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:03.528492 kubelet[2838]: E0126 18:29:03.528395 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:29:03.528492 kubelet[2838]: E0126 18:29:03.528440 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:29:03.528609 kubelet[2838]: E0126 18:29:03.528533 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc422a360d1a4f58bab37439bea74799,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:03.531935 containerd[1602]: time="2026-01-26T18:29:03.531532604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:29:03.599499 containerd[1602]: 
time="2026-01-26T18:29:03.599222190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:03.602419 containerd[1602]: time="2026-01-26T18:29:03.602025874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:29:03.602419 containerd[1602]: time="2026-01-26T18:29:03.602099461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:03.602706 kubelet[2838]: E0126 18:29:03.602535 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:29:03.602706 kubelet[2838]: E0126 18:29:03.602575 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:29:03.603001 kubelet[2838]: E0126 18:29:03.602732 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:03.604403 kubelet[2838]: E0126 18:29:03.603966 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:29:06.441052 containerd[1602]: time="2026-01-26T18:29:06.440681058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:29:06.552427 containerd[1602]: time="2026-01-26T18:29:06.552147001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:06.554748 containerd[1602]: time="2026-01-26T18:29:06.554622241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:29:06.555034 containerd[1602]: time="2026-01-26T18:29:06.554906170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:06.555219 kubelet[2838]: E0126 18:29:06.555085 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:29:06.555219 kubelet[2838]: E0126 18:29:06.555203 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:29:06.555948 kubelet[2838]: E0126 18:29:06.555409 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:06.559032 containerd[1602]: time="2026-01-26T18:29:06.558688093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:29:06.637103 containerd[1602]: time="2026-01-26T18:29:06.636976200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:06.639181 containerd[1602]: time="2026-01-26T18:29:06.639058117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:29:06.639270 containerd[1602]: time="2026-01-26T18:29:06.639201755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:06.641126 kubelet[2838]: E0126 18:29:06.641080 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:29:06.641419 
kubelet[2838]: E0126 18:29:06.641279 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:29:06.642514 kubelet[2838]: E0126 18:29:06.642090 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:06.645098 kubelet[2838]: E0126 18:29:06.644513 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:29:07.445370 containerd[1602]: time="2026-01-26T18:29:07.445021323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:29:07.531554 containerd[1602]: time="2026-01-26T18:29:07.530908000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:07.536474 containerd[1602]: time="2026-01-26T18:29:07.536180479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:29:07.536474 containerd[1602]: time="2026-01-26T18:29:07.536273173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:07.536592 kubelet[2838]: E0126 18:29:07.536547 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:07.536631 kubelet[2838]: E0126 18:29:07.536604 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:07.537352 kubelet[2838]: E0126 18:29:07.537048 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6pl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:07.542999 kubelet[2838]: E0126 18:29:07.539737 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:29:08.438087 kubelet[2838]: E0126 18:29:08.437925 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:08.439072 containerd[1602]: time="2026-01-26T18:29:08.439038971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:29:08.509616 containerd[1602]: time="2026-01-26T18:29:08.509413407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:08.512357 containerd[1602]: time="2026-01-26T18:29:08.512203623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:29:08.512442 containerd[1602]: time="2026-01-26T18:29:08.512375443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:08.513029 kubelet[2838]: E0126 18:29:08.512653 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:29:08.513029 kubelet[2838]: E0126 18:29:08.512976 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:29:08.513376 kubelet[2838]: E0126 18:29:08.513182 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:08.515564 kubelet[2838]: E0126 18:29:08.515451 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" 
podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:29:09.442372 containerd[1602]: time="2026-01-26T18:29:09.441451989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:29:09.510118 containerd[1602]: time="2026-01-26T18:29:09.509954135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:09.512225 containerd[1602]: time="2026-01-26T18:29:09.512055215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:29:09.512225 containerd[1602]: time="2026-01-26T18:29:09.512187141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:09.512824 kubelet[2838]: E0126 18:29:09.512707 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:29:09.513428 kubelet[2838]: E0126 18:29:09.512970 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:29:09.513428 kubelet[2838]: E0126 18:29:09.513085 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:09.514952 kubelet[2838]: E0126 18:29:09.514723 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:29:11.467006 containerd[1602]: time="2026-01-26T18:29:11.466556269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:29:11.582689 containerd[1602]: time="2026-01-26T18:29:11.582644302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:11.590181 containerd[1602]: 
time="2026-01-26T18:29:11.589920790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:29:11.590181 containerd[1602]: time="2026-01-26T18:29:11.590077571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:11.590409 kubelet[2838]: E0126 18:29:11.590378 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:11.592096 kubelet[2838]: E0126 18:29:11.591943 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:11.592096 kubelet[2838]: E0126 18:29:11.592060 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfdrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:11.594236 kubelet[2838]: E0126 18:29:11.593512 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:29:14.439707 kubelet[2838]: E0126 18:29:14.439636 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:29:19.261110 kubelet[2838]: E0126 18:29:19.260641 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:19.437845 kubelet[2838]: E0126 18:29:19.437534 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:20.440450 kubelet[2838]: E0126 18:29:20.439926 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:29:20.440450 kubelet[2838]: E0126 18:29:20.440260 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:29:20.443941 kubelet[2838]: E0126 18:29:20.443555 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:29:23.440165 kubelet[2838]: E0126 18:29:23.439939 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:29:24.439060 kubelet[2838]: E0126 18:29:24.439021 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:29:28.436907 kubelet[2838]: E0126 18:29:28.436701 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:28.443047 containerd[1602]: time="2026-01-26T18:29:28.440966996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:29:28.523674 containerd[1602]: time="2026-01-26T18:29:28.523491281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 
18:29:28.528945 containerd[1602]: time="2026-01-26T18:29:28.528046602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:29:28.529177 containerd[1602]: time="2026-01-26T18:29:28.528487244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:28.531451 kubelet[2838]: E0126 18:29:28.531172 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:29:28.531451 kubelet[2838]: E0126 18:29:28.531369 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:29:28.531588 kubelet[2838]: E0126 18:29:28.531474 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc422a360d1a4f58bab37439bea74799,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:28.537489 containerd[1602]: time="2026-01-26T18:29:28.537454190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:29:28.618011 containerd[1602]: 
time="2026-01-26T18:29:28.617657198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:28.624716 containerd[1602]: time="2026-01-26T18:29:28.622898128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:29:28.624716 containerd[1602]: time="2026-01-26T18:29:28.622983888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:28.625029 kubelet[2838]: E0126 18:29:28.623921 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:29:28.625029 kubelet[2838]: E0126 18:29:28.623968 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:29:28.625029 kubelet[2838]: E0126 18:29:28.624360 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:28.628078 kubelet[2838]: E0126 18:29:28.627554 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:29:32.439471 containerd[1602]: time="2026-01-26T18:29:32.438639333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:29:32.664965 containerd[1602]: time="2026-01-26T18:29:32.664643968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:32.683667 containerd[1602]: time="2026-01-26T18:29:32.683429563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:29:32.683667 containerd[1602]: time="2026-01-26T18:29:32.683563253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:32.684893 kubelet[2838]: E0126 18:29:32.684056 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:29:32.684893 kubelet[2838]: E0126 18:29:32.684113 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:29:32.684893 kubelet[2838]: E0126 18:29:32.684361 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:32.689692 kubelet[2838]: E0126 18:29:32.689201 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 
26 18:29:33.453908 containerd[1602]: time="2026-01-26T18:29:33.453576684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:29:33.536956 containerd[1602]: time="2026-01-26T18:29:33.536037340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:33.540736 containerd[1602]: time="2026-01-26T18:29:33.540565055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:29:33.541143 kubelet[2838]: E0126 18:29:33.541012 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:29:33.541216 kubelet[2838]: E0126 18:29:33.541147 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:29:33.541726 kubelet[2838]: E0126 18:29:33.541371 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 26 18:29:33.542097 containerd[1602]: time="2026-01-26T18:29:33.540643779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:33.546468 containerd[1602]: time="2026-01-26T18:29:33.546024600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:29:33.638611 containerd[1602]: time="2026-01-26T18:29:33.638453481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:33.642664 containerd[1602]: time="2026-01-26T18:29:33.642001434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:33.642664 containerd[1602]: time="2026-01-26T18:29:33.642137648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:29:33.644609 kubelet[2838]: E0126 18:29:33.644491 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:29:33.644609 kubelet[2838]: E0126 18:29:33.644542 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:29:33.645500 kubelet[2838]: E0126 18:29:33.644643 2838 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:33.647156 kubelet[2838]: E0126 18:29:33.647033 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:29:35.449537 containerd[1602]: time="2026-01-26T18:29:35.449085279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:29:35.568936 containerd[1602]: time="2026-01-26T18:29:35.567909819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:35.575173 containerd[1602]: time="2026-01-26T18:29:35.574544428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:29:35.575173 containerd[1602]: time="2026-01-26T18:29:35.574718173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:35.575573 kubelet[2838]: E0126 18:29:35.575438 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:35.575573 kubelet[2838]: E0126 18:29:35.575482 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:35.577491 kubelet[2838]: E0126 18:29:35.575726 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6pl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:35.580010 kubelet[2838]: E0126 18:29:35.579243 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:29:36.446905 containerd[1602]: time="2026-01-26T18:29:36.446573462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:29:36.533043 containerd[1602]: time="2026-01-26T18:29:36.532079390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 
18:29:36.538103 containerd[1602]: time="2026-01-26T18:29:36.537520096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:29:36.538103 containerd[1602]: time="2026-01-26T18:29:36.537718758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:36.539161 kubelet[2838]: E0126 18:29:36.538638 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:36.539717 kubelet[2838]: E0126 18:29:36.539536 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:29:36.540044 kubelet[2838]: E0126 18:29:36.539914 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfdrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:36.541723 kubelet[2838]: E0126 18:29:36.541556 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:29:37.453132 containerd[1602]: time="2026-01-26T18:29:37.452681459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:29:37.557969 containerd[1602]: time="2026-01-26T18:29:37.557143185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:29:37.560119 containerd[1602]: time="2026-01-26T18:29:37.559663644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:29:37.560119 containerd[1602]: time="2026-01-26T18:29:37.560023405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:29:37.561000 kubelet[2838]: E0126 18:29:37.560630 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:29:37.561000 kubelet[2838]: E0126 18:29:37.560705 2838 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:29:37.561383 kubelet[2838]: E0126 18:29:37.561018 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:29:37.562660 kubelet[2838]: E0126 18:29:37.562096 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:29:42.442559 kubelet[2838]: 
E0126 18:29:42.442499 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:29:44.443909 kubelet[2838]: E0126 18:29:44.443470 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:29:46.438190 kubelet[2838]: E0126 18:29:46.437350 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:29:47.446490 kubelet[2838]: E0126 18:29:47.446436 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:29:49.452678 kubelet[2838]: E0126 18:29:49.452636 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:29:50.440509 kubelet[2838]: E0126 18:29:50.440014 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:29:50.440509 kubelet[2838]: E0126 18:29:50.440099 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:29:51.147163 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:52516.service - OpenSSH per-connection server daemon (10.0.0.1:52516). Jan 26 18:29:51.167049 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 26 18:29:51.167362 kernel: audit: type=1130 audit(1769452191.145:739): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:52516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:29:51.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:52516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:29:51.535000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.554132 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 52516 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:29:51.570203 kernel: audit: type=1101 audit(1769452191.535:740): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.571000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.575084 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:29:51.615612 systemd-logind[1580]: New session 11 of user core. 
Jan 26 18:29:51.626018 kernel: audit: type=1103 audit(1769452191.571:741): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.626078 kernel: audit: type=1006 audit(1769452191.571:742): pid=5095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 26 18:29:51.571000 audit[5095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff68bc8400 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:29:51.666248 kernel: audit: type=1300 audit(1769452191.571:742): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff68bc8400 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:29:51.571000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:29:51.681032 kernel: audit: type=1327 audit(1769452191.571:742): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:29:51.681497 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 26 18:29:51.697000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.738004 kernel: audit: type=1105 audit(1769452191.697:743): pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.709000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:51.770679 kernel: audit: type=1103 audit(1769452191.709:744): pid=5099 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:52.130246 sshd[5099]: Connection closed by 10.0.0.1 port 52516 Jan 26 18:29:52.132186 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Jan 26 18:29:52.134000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:52.166990 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:52516.service: Deactivated successfully. 
Jan 26 18:29:52.180932 kernel: audit: type=1106 audit(1769452192.134:745): pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:52.181037 kernel: audit: type=1104 audit(1769452192.154:746): pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:52.154000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:52.184488 systemd[1]: session-11.scope: Deactivated successfully. Jan 26 18:29:52.193211 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 26 18:29:52.198550 systemd-logind[1580]: Removed session 11. Jan 26 18:29:52.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:52516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:29:53.459023 kubelet[2838]: E0126 18:29:53.458583 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:29:57.149146 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:40866.service - OpenSSH per-connection server daemon (10.0.0.1:40866). Jan 26 18:29:57.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:40866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:29:57.184095 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:29:57.184222 kernel: audit: type=1130 audit(1769452197.147:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:40866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:29:57.297000 audit[5119]: USER_ACCT pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.301711 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 40866 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:29:57.304209 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:29:57.321103 systemd-logind[1580]: New session 12 of user core. Jan 26 18:29:57.300000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.362851 kernel: audit: type=1101 audit(1769452197.297:749): pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.362953 kernel: audit: type=1103 audit(1769452197.300:750): pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.365006 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 26 18:29:57.385999 kernel: audit: type=1006 audit(1769452197.300:751): pid=5119 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 26 18:29:57.300000 audit[5119]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd76f22360 a2=3 a3=0 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:29:57.425364 kernel: audit: type=1300 audit(1769452197.300:751): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd76f22360 a2=3 a3=0 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:29:57.300000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:29:57.442002 kernel: audit: type=1327 audit(1769452197.300:751): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:29:57.378000 audit[5119]: USER_START pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.384000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.515569 kernel: audit: type=1105 audit(1769452197.378:752): pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.515696 kernel: audit: type=1103 audit(1769452197.384:753): pid=5123 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.663143 sshd[5123]: Connection closed by 10.0.0.1 port 40866 Jan 26 18:29:57.664066 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 26 18:29:57.673000 audit[5119]: USER_END pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.683429 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:40866.service: Deactivated successfully. Jan 26 18:29:57.687131 systemd[1]: session-12.scope: Deactivated successfully. Jan 26 18:29:57.689220 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 26 18:29:57.692064 systemd-logind[1580]: Removed session 12. 
Jan 26 18:29:57.722483 kernel: audit: type=1106 audit(1769452197.673:754): pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.673000 audit[5119]: CRED_DISP pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.755012 kernel: audit: type=1104 audit(1769452197.673:755): pid=5119 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:29:57.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:40866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:29:59.461561 kubelet[2838]: E0126 18:29:59.461227 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:30:01.443605 kubelet[2838]: E0126 18:30:01.440243 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:30:02.449005 kubelet[2838]: E0126 18:30:02.447999 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:30:02.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:02.687230 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:46764.service - OpenSSH per-connection server daemon (10.0.0.1:46764). Jan 26 18:30:02.696893 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:02.696956 kernel: audit: type=1130 audit(1769452202.685:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:02.839000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:02.845214 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:02.849935 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:02.860109 systemd-logind[1580]: New session 13 of user core. 
Jan 26 18:30:02.845000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:02.909178 kernel: audit: type=1101 audit(1769452202.839:758): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:02.909254 kernel: audit: type=1103 audit(1769452202.845:759): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:02.930264 kernel: audit: type=1006 audit(1769452202.845:760): pid=5140 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 26 18:30:02.930442 kernel: audit: type=1300 audit(1769452202.845:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc320d1620 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:02.845000 audit[5140]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc320d1620 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:02.845000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:02.986368 kernel: audit: type=1327 audit(1769452202.845:760): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:02.986449 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 26 18:30:02.996000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.068979 kernel: audit: type=1105 audit(1769452202.996:761): pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.069095 kernel: audit: type=1103 audit(1769452202.998:762): pid=5144 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:02.998000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.300485 sshd[5144]: Connection closed by 10.0.0.1 port 46764 Jan 26 18:30:03.301158 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:03.303000 audit[5140]: USER_END pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:30:03.312160 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:46764.service: Deactivated successfully. Jan 26 18:30:03.321170 systemd[1]: session-13.scope: Deactivated successfully. Jan 26 18:30:03.329096 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Jan 26 18:30:03.332244 systemd-logind[1580]: Removed session 13. Jan 26 18:30:03.355599 kernel: audit: type=1106 audit(1769452203.303:763): pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.355712 kernel: audit: type=1104 audit(1769452203.303:764): pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.303000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:03.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:04.440134 kubelet[2838]: E0126 18:30:04.439648 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:30:04.443019 kubelet[2838]: E0126 18:30:04.440260 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:30:05.440553 kubelet[2838]: E0126 18:30:05.440373 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:30:08.322432 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:46780.service - OpenSSH per-connection server daemon (10.0.0.1:46780). Jan 26 18:30:08.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:46780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:08.333946 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:08.334046 kernel: audit: type=1130 audit(1769452208.320:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:46780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:08.498000 audit[5158]: USER_ACCT pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.506494 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:08.510669 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 46780 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:08.516162 systemd-logind[1580]: New session 14 of user core. 
Jan 26 18:30:08.502000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.568441 kernel: audit: type=1101 audit(1769452208.498:767): pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.568562 kernel: audit: type=1103 audit(1769452208.502:768): pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.592962 kernel: audit: type=1006 audit(1769452208.502:769): pid=5158 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 26 18:30:08.593093 kernel: audit: type=1300 audit(1769452208.502:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98beecb0 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:08.502000 audit[5158]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98beecb0 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:08.631427 kernel: audit: type=1327 audit(1769452208.502:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:08.502000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:08.632253 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 26 18:30:08.642000 audit[5158]: USER_START pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.691245 kernel: audit: type=1105 audit(1769452208.642:770): pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.647000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.724941 kernel: audit: type=1103 audit(1769452208.647:771): pid=5162 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.881254 sshd[5162]: Connection closed by 10.0.0.1 port 46780 Jan 26 18:30:08.884032 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:08.886000 audit[5158]: USER_END pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:30:08.894041 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Jan 26 18:30:08.894099 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:46780.service: Deactivated successfully. Jan 26 18:30:08.898259 systemd[1]: session-14.scope: Deactivated successfully. Jan 26 18:30:08.905250 systemd-logind[1580]: Removed session 14. Jan 26 18:30:08.938155 kernel: audit: type=1106 audit(1769452208.886:772): pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.939137 kernel: audit: type=1104 audit(1769452208.886:773): pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.886000 audit[5158]: CRED_DISP pid=5158 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:08.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:46780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:11.443117 kubelet[2838]: E0126 18:30:11.443076 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:30:13.909252 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). Jan 26 18:30:13.951545 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:13.951594 kernel: audit: type=1130 audit(1769452213.908:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:45328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:13.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:45328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:14.076000 audit[5182]: USER_ACCT pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.085538 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:14.086662 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:14.099974 systemd-logind[1580]: New session 15 of user core. Jan 26 18:30:14.117245 kernel: audit: type=1101 audit(1769452214.076:776): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.117462 kernel: audit: type=1103 audit(1769452214.082:777): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.082000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.184111 kernel: audit: type=1006 audit(1769452214.083:778): pid=5182 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 26 18:30:14.184493 kernel: audit: type=1300 audit(1769452214.083:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ef79060 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:14.083000 audit[5182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ef79060 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:14.231216 kernel: audit: type=1327 audit(1769452214.083:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:14.083000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:14.251943 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 26 18:30:14.258000 audit[5182]: USER_START pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.262000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.345983 kernel: audit: type=1105 audit(1769452214.258:779): pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.346079 kernel: audit: type=1103 audit(1769452214.262:780): pid=5186 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.443190 containerd[1602]: time="2026-01-26T18:30:14.442398466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:30:14.576223 containerd[1602]: time="2026-01-26T18:30:14.576070748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:14.583443 sshd[5186]: Connection closed by 10.0.0.1 port 45328 Jan 26 18:30:14.585092 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:14.587197 containerd[1602]: time="2026-01-26T18:30:14.585563296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:30:14.588182 containerd[1602]: time="2026-01-26T18:30:14.588056707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:14.589516 kubelet[2838]: E0126 18:30:14.589124 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:30:14.589516 kubelet[2838]: E0126 18:30:14.589270 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:30:14.591583 kubelet[2838]: E0126 18:30:14.589535 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:14.589000 audit[5182]: USER_END pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.596070 kubelet[2838]: E0126 18:30:14.591605 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 
18:30:14.599583 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:45328.service: Deactivated successfully. Jan 26 18:30:14.606190 systemd[1]: session-15.scope: Deactivated successfully. Jan 26 18:30:14.617675 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Jan 26 18:30:14.621470 systemd-logind[1580]: Removed session 15. Jan 26 18:30:14.590000 audit[5182]: CRED_DISP pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.684071 kernel: audit: type=1106 audit(1769452214.589:781): pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.684426 kernel: audit: type=1104 audit(1769452214.590:782): pid=5182 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:14.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:45328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:16.442842 kubelet[2838]: E0126 18:30:16.442570 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:30:16.445600 containerd[1602]: time="2026-01-26T18:30:16.445124079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:30:16.528541 containerd[1602]: time="2026-01-26T18:30:16.528213268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:16.537691 containerd[1602]: time="2026-01-26T18:30:16.536994408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:16.537691 containerd[1602]: time="2026-01-26T18:30:16.537158665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:30:16.542008 kubelet[2838]: E0126 18:30:16.538139 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:30:16.542008 kubelet[2838]: E0126 18:30:16.538193 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:30:16.542008 kubelet[2838]: E0126 18:30:16.538417 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc422a360d1a4f58bab37439bea74799,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:16.544571 containerd[1602]: time="2026-01-26T18:30:16.543993986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:30:16.639262 containerd[1602]: time="2026-01-26T18:30:16.639164624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:16.646084 containerd[1602]: time="2026-01-26T18:30:16.645710174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:30:16.646505 containerd[1602]: time="2026-01-26T18:30:16.645973701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:16.648582 kubelet[2838]: E0126 18:30:16.647933 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:30:16.648582 kubelet[2838]: E0126 18:30:16.648087 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:30:16.648582 kubelet[2838]: E0126 18:30:16.648205 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:16.650168 kubelet[2838]: E0126 18:30:16.650060 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:30:17.454917 containerd[1602]: time="2026-01-26T18:30:17.454663512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:30:17.534926 containerd[1602]: time="2026-01-26T18:30:17.534672531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:17.542999 containerd[1602]: time="2026-01-26T18:30:17.540731184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:30:17.542999 containerd[1602]: time="2026-01-26T18:30:17.542403413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:17.548623 kubelet[2838]: E0126 18:30:17.548024 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:30:17.548623 kubelet[2838]: E0126 18:30:17.548185 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:30:17.548623 kubelet[2838]: E0126 18:30:17.548452 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6pl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:17.551567 kubelet[2838]: E0126 18:30:17.551184 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:30:19.458687 kubelet[2838]: E0126 18:30:19.458653 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:19.466585 containerd[1602]: 
time="2026-01-26T18:30:19.465687409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:30:19.575075 containerd[1602]: time="2026-01-26T18:30:19.575027902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:19.585412 containerd[1602]: time="2026-01-26T18:30:19.584109618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:30:19.585412 containerd[1602]: time="2026-01-26T18:30:19.585116226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:19.588085 kubelet[2838]: E0126 18:30:19.587479 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:30:19.588085 kubelet[2838]: E0126 18:30:19.587517 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:30:19.588085 kubelet[2838]: E0126 18:30:19.587615 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:19.589540 kubelet[2838]: E0126 18:30:19.589514 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:30:19.599217 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:45332.service - OpenSSH per-connection server daemon (10.0.0.1:45332). 
Jan 26 18:30:19.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.106:22-10.0.0.1:45332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:19.608963 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:19.609048 kernel: audit: type=1130 audit(1769452219.598:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.106:22-10.0.0.1:45332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:19.790000 audit[5228]: USER_ACCT pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.793413 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 45332 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:19.796711 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:19.829628 systemd-logind[1580]: New session 16 of user core. 
Jan 26 18:30:19.839959 kernel: audit: type=1101 audit(1769452219.790:785): pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.792000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.792000 audit[5228]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd069096c0 a2=3 a3=0 items=0 ppid=1 pid=5228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:19.882207 kernel: audit: type=1103 audit(1769452219.792:786): pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.882443 kernel: audit: type=1006 audit(1769452219.792:787): pid=5228 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 26 18:30:19.882485 kernel: audit: type=1300 audit(1769452219.792:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd069096c0 a2=3 a3=0 items=0 ppid=1 pid=5228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:19.792000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:19.962567 kernel: audit: type=1327 audit(1769452219.792:787): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:19.968722 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 26 18:30:19.983000 audit[5228]: USER_START pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.987940 kernel: audit: type=1105 audit(1769452219.983:788): pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:19.992000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:20.068412 kernel: audit: type=1103 audit(1769452219.992:789): pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:20.338085 sshd[5232]: Connection closed by 10.0.0.1 port 45332 Jan 26 18:30:20.339101 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:20.341000 audit[5228]: USER_END pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:30:20.350122 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:45332.service: Deactivated successfully. Jan 26 18:30:20.351947 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Jan 26 18:30:20.355088 systemd[1]: session-16.scope: Deactivated successfully. Jan 26 18:30:20.360590 systemd-logind[1580]: Removed session 16. Jan 26 18:30:20.341000 audit[5228]: CRED_DISP pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:20.418052 kernel: audit: type=1106 audit(1769452220.341:790): pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:20.418178 kernel: audit: type=1104 audit(1769452220.341:791): pid=5228 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:20.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.106:22-10.0.0.1:45332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:20.845686 kernel: hrtimer: interrupt took 4282389 ns Jan 26 18:30:24.440172 kubelet[2838]: E0126 18:30:24.439957 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:24.446698 containerd[1602]: time="2026-01-26T18:30:24.445556649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:30:24.520619 containerd[1602]: time="2026-01-26T18:30:24.520208544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:24.526018 containerd[1602]: time="2026-01-26T18:30:24.525463067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:30:24.526018 containerd[1602]: time="2026-01-26T18:30:24.525566149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:24.526590 kubelet[2838]: E0126 18:30:24.526437 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:30:24.529190 kubelet[2838]: E0126 18:30:24.528518 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:30:24.529190 kubelet[2838]: E0126 18:30:24.529125 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:24.536188 containerd[1602]: time="2026-01-26T18:30:24.536015446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:30:24.616274 containerd[1602]: time="2026-01-26T18:30:24.612717022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:24.623736 containerd[1602]: time="2026-01-26T18:30:24.623539873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:30:24.623736 containerd[1602]: time="2026-01-26T18:30:24.623635511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:24.631080 kubelet[2838]: E0126 18:30:24.629933 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:30:24.631080 kubelet[2838]: E0126 18:30:24.630008 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:30:24.631080 kubelet[2838]: E0126 18:30:24.630455 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cfhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gzr9m_calico-system(e99188ce-3ac3-4524-8689-b68793ad3ef1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:24.634061 kubelet[2838]: E0126 18:30:24.633989 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:30:25.359087 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:42678.service - OpenSSH per-connection server daemon (10.0.0.1:42678). Jan 26 18:30:25.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.106:22-10.0.0.1:42678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:25.368077 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:25.368141 kernel: audit: type=1130 audit(1769452225.357:793): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.106:22-10.0.0.1:42678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:25.440256 kubelet[2838]: E0126 18:30:25.440044 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:30:25.524000 audit[5246]: USER_ACCT pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.526281 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 42678 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:25.531964 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:25.546625 systemd-logind[1580]: New session 17 of user core. 
Jan 26 18:30:25.526000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.602144 kernel: audit: type=1101 audit(1769452225.524:794): pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.602261 kernel: audit: type=1103 audit(1769452225.526:795): pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.602416 kernel: audit: type=1006 audit(1769452225.526:796): pid=5246 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 26 18:30:25.526000 audit[5246]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0613c730 a2=3 a3=0 items=0 ppid=1 pid=5246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:25.637081 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 26 18:30:25.668152 kernel: audit: type=1300 audit(1769452225.526:796): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0613c730 a2=3 a3=0 items=0 ppid=1 pid=5246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:25.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:25.687172 kernel: audit: type=1327 audit(1769452225.526:796): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:25.648000 audit[5246]: USER_START pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.684000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.771077 kernel: audit: type=1105 audit(1769452225.648:797): pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:25.771192 kernel: audit: type=1103 audit(1769452225.684:798): pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:26.021550 sshd[5250]: Connection closed by 10.0.0.1 port 42678 
Jan 26 18:30:26.020625 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:26.024000 audit[5246]: USER_END pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:26.032145 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:42678.service: Deactivated successfully. Jan 26 18:30:26.042478 systemd[1]: session-17.scope: Deactivated successfully. Jan 26 18:30:26.047277 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Jan 26 18:30:26.052192 systemd-logind[1580]: Removed session 17. Jan 26 18:30:26.080053 kernel: audit: type=1106 audit(1769452226.024:799): pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:26.025000 audit[5246]: CRED_DISP pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:26.116185 kernel: audit: type=1104 audit(1769452226.025:800): pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:26.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.106:22-10.0.0.1:42678 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 26 18:30:26.438543 kubelet[2838]: E0126 18:30:26.437737 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:29.438540 kubelet[2838]: E0126 18:30:29.438072 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:30.449181 kubelet[2838]: E0126 18:30:30.449129 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:30:31.045561 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:42692.service - OpenSSH per-connection server daemon (10.0.0.1:42692). Jan 26 18:30:31.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:42692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:31.053579 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:31.053638 kernel: audit: type=1130 audit(1769452231.043:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:42692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:31.182000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.189438 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 42692 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:31.200072 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:31.210120 systemd-logind[1580]: New session 18 of user core. Jan 26 18:30:31.195000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.222017 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 26 18:30:31.259192 kernel: audit: type=1101 audit(1769452231.182:803): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.259282 kernel: audit: type=1103 audit(1769452231.195:804): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.195000 audit[5286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8cfaf80 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:31.328085 kernel: audit: type=1006 audit(1769452231.195:805): pid=5286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 26 18:30:31.328212 kernel: audit: type=1300 audit(1769452231.195:805): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8cfaf80 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:31.195000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:31.345561 kernel: audit: type=1327 audit(1769452231.195:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:31.230000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.394222 kernel: audit: type=1105 audit(1769452231.230:806): pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.235000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.434409 kernel: audit: type=1103 audit(1769452231.235:807): pid=5290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.460099 containerd[1602]: time="2026-01-26T18:30:31.457481571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:30:31.460640 kubelet[2838]: E0126 18:30:31.459558 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:30:31.517963 sshd[5290]: Connection closed by 10.0.0.1 port 42692 Jan 26 18:30:31.520276 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:31.524000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.537185 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:42692.service: Deactivated successfully. Jan 26 18:30:31.544519 systemd[1]: session-18.scope: Deactivated successfully. Jan 26 18:30:31.551043 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Jan 26 18:30:31.553594 systemd-logind[1580]: Removed session 18. 
Jan 26 18:30:31.562995 containerd[1602]: time="2026-01-26T18:30:31.562962032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:30:31.567719 containerd[1602]: time="2026-01-26T18:30:31.567619997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:30:31.567719 containerd[1602]: time="2026-01-26T18:30:31.567693113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:30:31.568621 kubelet[2838]: E0126 18:30:31.568471 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:30:31.568621 kubelet[2838]: E0126 18:30:31.568616 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:30:31.569028 kubelet[2838]: E0126 18:30:31.568730 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfdrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-dbcn4_calico-apiserver(bcefd4f3-4cd3-4d24-b71b-627a7a3ce855): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:30:31.571073 kubelet[2838]: E0126 18:30:31.570721 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:30:31.576588 kernel: audit: type=1106 audit(1769452231.524:808): pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.525000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:31.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:42692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:31.619112 kernel: audit: type=1104 audit(1769452231.525:809): pid=5286 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:32.438576 kubelet[2838]: E0126 18:30:32.438222 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:33.447685 kubelet[2838]: E0126 18:30:33.447466 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:30:36.441560 kubelet[2838]: E0126 18:30:36.441476 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:30:36.537148 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340). Jan 26 18:30:36.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:36340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:36.541563 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:36.541623 kernel: audit: type=1130 audit(1769452236.535:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:36340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:36.649000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.651551 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:36.655162 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:36.666941 systemd-logind[1580]: New session 19 of user core. 
Jan 26 18:30:36.651000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.697601 kernel: audit: type=1101 audit(1769452236.649:812): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.697679 kernel: audit: type=1103 audit(1769452236.651:813): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.697729 kernel: audit: type=1006 audit(1769452236.652:814): pid=5305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 26 18:30:36.652000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffedea0d20 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:36.713228 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 26 18:30:36.739903 kernel: audit: type=1300 audit(1769452236.652:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffedea0d20 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:36.652000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:36.748553 kernel: audit: type=1327 audit(1769452236.652:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:36.748630 kernel: audit: type=1105 audit(1769452236.720:815): pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.720000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.722000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.796370 kernel: audit: type=1103 audit(1769452236.722:816): pid=5309 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.877730 sshd[5309]: Connection closed by 10.0.0.1 port 36340 
Jan 26 18:30:36.880069 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:36.883000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.896958 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Jan 26 18:30:36.898182 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:36340.service: Deactivated successfully. Jan 26 18:30:36.909187 kernel: audit: type=1106 audit(1769452236.883:817): pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.910007 systemd[1]: session-19.scope: Deactivated successfully. Jan 26 18:30:36.884000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.920640 systemd-logind[1580]: Removed session 19. 
Jan 26 18:30:36.940066 kernel: audit: type=1104 audit(1769452236.884:818): pid=5305 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:36.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:36340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:38.438455 kubelet[2838]: E0126 18:30:38.438105 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:38.440887 kubelet[2838]: E0126 18:30:38.440847 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:30:41.901046 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:36344.service - OpenSSH per-connection server daemon (10.0.0.1:36344). Jan 26 18:30:41.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:41.904546 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:41.904577 kernel: audit: type=1130 audit(1769452241.899:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:41.998000 audit[5323]: USER_ACCT pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.015874 kernel: audit: type=1101 audit(1769452241.998:821): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.017538 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 36344 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:42.018580 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:42.015000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.039603 systemd-logind[1580]: New session 20 of user core. 
Jan 26 18:30:42.042063 kernel: audit: type=1103 audit(1769452242.015:822): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.042114 kernel: audit: type=1006 audit(1769452242.015:823): pid=5323 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 26 18:30:42.042131 kernel: audit: type=1300 audit(1769452242.015:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7748e6c0 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:42.015000 audit[5323]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7748e6c0 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:42.058272 kernel: audit: type=1327 audit(1769452242.015:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:42.015000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:42.067071 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 26 18:30:42.070000 audit[5323]: USER_START pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.092089 kernel: audit: type=1105 audit(1769452242.070:824): pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.071000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.106932 kernel: audit: type=1103 audit(1769452242.071:825): pid=5327 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.227592 sshd[5327]: Connection closed by 10.0.0.1 port 36344 Jan 26 18:30:42.229018 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:42.228000 audit[5323]: USER_END pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.229000 audit[5323]: CRED_DISP pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.256294 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:36344.service: Deactivated successfully. Jan 26 18:30:42.259169 systemd[1]: session-20.scope: Deactivated successfully. Jan 26 18:30:42.260963 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Jan 26 18:30:42.268216 kernel: audit: type=1106 audit(1769452242.228:826): pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.268274 kernel: audit: type=1104 audit(1769452242.229:827): pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:42.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.106:22-10.0.0.1:36346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:42.267248 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:36346.service - OpenSSH per-connection server daemon (10.0.0.1:36346). Jan 26 18:30:42.269981 systemd-logind[1580]: Removed session 20. 
Jan 26 18:30:42.364000 audit[5343]: USER_ACCT pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.366937 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:42.365000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.365000 audit[5343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2fe73b70 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:42.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:42.368668 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:42.377437 systemd-logind[1580]: New session 21 of user core. Jan 26 18:30:42.386666 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 26 18:30:42.390000 audit[5343]: USER_START pid=5343 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.394000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.598448 sshd[5347]: Connection closed by 10.0.0.1 port 36346 Jan 26 18:30:42.598724 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:42.601000 audit[5343]: USER_END pid=5343 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.601000 audit[5343]: CRED_DISP pid=5343 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.612035 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:42468.service - OpenSSH per-connection server daemon (10.0.0.1:42468). Jan 26 18:30:42.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.106:22-10.0.0.1:42468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:42.613054 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:36346.service: Deactivated successfully. 
Jan 26 18:30:42.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.106:22-10.0.0.1:36346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:42.617284 systemd[1]: session-21.scope: Deactivated successfully. Jan 26 18:30:42.620234 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Jan 26 18:30:42.626407 systemd-logind[1580]: Removed session 21. Jan 26 18:30:42.693000 audit[5356]: USER_ACCT pid=5356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.695672 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 42468 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:42.695000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.695000 audit[5356]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0c9d1940 a2=3 a3=0 items=0 ppid=1 pid=5356 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:42.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:42.698290 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:42.706219 systemd-logind[1580]: New session 22 of user core. Jan 26 18:30:42.712461 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 26 18:30:42.717000 audit[5356]: USER_START pid=5356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.720000 audit[5363]: CRED_ACQ pid=5363 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.844937 sshd[5363]: Connection closed by 10.0.0.1 port 42468 Jan 26 18:30:42.845414 sshd-session[5356]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:42.845000 audit[5356]: USER_END pid=5356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.846000 audit[5356]: CRED_DISP pid=5356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:42.851677 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Jan 26 18:30:42.851999 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:42468.service: Deactivated successfully. Jan 26 18:30:42.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.106:22-10.0.0.1:42468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:42.854957 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 26 18:30:42.858637 systemd-logind[1580]: Removed session 22. Jan 26 18:30:43.438993 kubelet[2838]: E0126 18:30:43.438905 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:43.443938 kubelet[2838]: E0126 18:30:43.441728 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:30:43.443938 kubelet[2838]: E0126 18:30:43.441950 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:30:44.437615 kubelet[2838]: E0126 18:30:44.437560 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:30:46.442183 kubelet[2838]: E0126 18:30:46.442056 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:30:47.863468 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:42476.service - OpenSSH per-connection server daemon (10.0.0.1:42476). Jan 26 18:30:47.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:47.872635 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 26 18:30:47.872892 kernel: audit: type=1130 audit(1769452247.862:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:47.990000 audit[5376]: USER_ACCT pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:47.993196 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 42476 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:47.997163 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:48.009633 systemd-logind[1580]: New session 23 of user core. Jan 26 18:30:48.016931 kernel: audit: type=1101 audit(1769452247.990:848): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:47.992000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.019179 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 26 18:30:48.039913 kernel: audit: type=1103 audit(1769452247.992:849): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.057899 kernel: audit: type=1006 audit(1769452247.992:850): pid=5376 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 26 18:30:47.992000 audit[5376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcad74be40 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:47.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:48.089689 kernel: audit: type=1300 audit(1769452247.992:850): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcad74be40 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:48.089892 kernel: audit: type=1327 audit(1769452247.992:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:48.089938 kernel: audit: type=1105 audit(1769452248.026:851): pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.026000 audit[5376]: USER_START pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.031000 audit[5380]: CRED_ACQ pid=5380 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.133207 kernel: audit: type=1103 audit(1769452248.031:852): pid=5380 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.205561 sshd[5380]: Connection closed by 10.0.0.1 port 42476 Jan 26 18:30:48.206554 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:48.207000 audit[5376]: USER_END pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.217641 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Jan 26 18:30:48.219172 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:42476.service: Deactivated successfully. Jan 26 18:30:48.225256 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 26 18:30:48.235018 kernel: audit: type=1106 audit(1769452248.207:853): pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.235082 kernel: audit: type=1104 audit(1769452248.207:854): pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.207000 audit[5376]: CRED_DISP pid=5376 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:48.239406 systemd-logind[1580]: Removed session 23. Jan 26 18:30:48.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:48.445414 kubelet[2838]: E0126 18:30:48.445260 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:30:51.443567 kubelet[2838]: E0126 18:30:51.441732 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:30:52.438293 kubelet[2838]: E0126 18:30:52.437932 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:30:53.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.106:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 26 18:30:53.236051 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:41268.service - OpenSSH per-connection server daemon (10.0.0.1:41268). Jan 26 18:30:53.241898 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:53.242033 kernel: audit: type=1130 audit(1769452253.234:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.106:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:53.357000 audit[5426]: USER_ACCT pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.359690 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 41268 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:53.370230 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:53.387032 kernel: audit: type=1101 audit(1769452253.357:857): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.363000 audit[5426]: CRED_ACQ pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.391595 systemd-logind[1580]: New session 24 of user core. 
Jan 26 18:30:53.415027 kernel: audit: type=1103 audit(1769452253.363:858): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.415211 kernel: audit: type=1006 audit(1769452253.364:859): pid=5426 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 26 18:30:53.425943 kernel: audit: type=1300 audit(1769452253.364:859): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3fe97950 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:53.364000 audit[5426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3fe97950 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:53.364000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:53.453598 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 26 18:30:53.470891 kernel: audit: type=1327 audit(1769452253.364:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:53.473000 audit[5426]: USER_START pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.506160 kernel: audit: type=1105 audit(1769452253.473:860): pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.482000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.534214 kernel: audit: type=1103 audit(1769452253.482:861): pid=5430 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.723839 sshd[5430]: Connection closed by 10.0.0.1 port 41268 Jan 26 18:30:53.723662 sshd-session[5426]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:53.726000 audit[5426]: USER_END pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 26 18:30:53.732644 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:41268.service: Deactivated successfully. Jan 26 18:30:53.738907 systemd[1]: session-24.scope: Deactivated successfully. Jan 26 18:30:53.742014 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Jan 26 18:30:53.744423 systemd-logind[1580]: Removed session 24. Jan 26 18:30:53.727000 audit[5426]: CRED_DISP pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.779506 kernel: audit: type=1106 audit(1769452253.726:862): pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.779651 kernel: audit: type=1104 audit(1769452253.727:863): pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:53.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.106:22-10.0.0.1:41268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:54.439102 kubelet[2838]: E0126 18:30:54.439053 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:30:55.454898 kubelet[2838]: E0126 18:30:55.454165 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:30:57.448947 kubelet[2838]: E0126 18:30:57.448692 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:30:58.455074 kubelet[2838]: E0126 18:30:58.454968 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:30:58.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.106:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:30:58.756269 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:41278.service - OpenSSH per-connection server daemon (10.0.0.1:41278). Jan 26 18:30:58.761950 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:30:58.762015 kernel: audit: type=1130 audit(1769452258.754:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.106:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:58.893000 audit[5445]: USER_ACCT pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:58.895699 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:30:58.900596 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:30:58.917533 systemd-logind[1580]: New session 25 of user core. Jan 26 18:30:58.898000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:58.958050 kernel: audit: type=1101 audit(1769452258.893:866): pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:58.958183 kernel: audit: type=1103 audit(1769452258.898:867): pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:58.975052 kernel: audit: type=1006 audit(1769452258.898:868): pid=5445 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 26 18:30:58.898000 audit[5445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe528fd80 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:58.898000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:59.016886 kernel: audit: type=1300 audit(1769452258.898:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe528fd80 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:30:59.016971 kernel: audit: type=1327 audit(1769452258.898:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:30:59.024498 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 26 18:30:59.056000 audit[5445]: USER_START pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.061000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.102698 kernel: audit: type=1105 audit(1769452259.056:869): pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.102942 kernel: audit: type=1103 audit(1769452259.061:870): pid=5449 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.344697 sshd[5449]: Connection closed by 10.0.0.1 port 41278 Jan 26 18:30:59.348231 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 26 18:30:59.353000 audit[5445]: USER_END pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.360539 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit. Jan 26 18:30:59.362205 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:41278.service: Deactivated successfully. Jan 26 18:30:59.368088 systemd[1]: session-25.scope: Deactivated successfully. Jan 26 18:30:59.374017 systemd-logind[1580]: Removed session 25. 
Jan 26 18:30:59.380216 kernel: audit: type=1106 audit(1769452259.353:871): pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.380385 kernel: audit: type=1104 audit(1769452259.353:872): pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.353000 audit[5445]: CRED_DISP pid=5445 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:30:59.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.106:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:30:59.442474 kubelet[2838]: E0126 18:30:59.441429 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:31:03.441659 kubelet[2838]: E0126 18:31:03.440700 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:31:04.373654 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:51742.service - OpenSSH per-connection server daemon (10.0.0.1:51742). Jan 26 18:31:04.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.106:22-10.0.0.1:51742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:04.380526 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:04.380576 kernel: audit: type=1130 audit(1769452264.373:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.106:22-10.0.0.1:51742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:04.516000 audit[5465]: USER_ACCT pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.518516 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:04.521981 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:04.533715 systemd-logind[1580]: New session 26 of user core. 
Jan 26 18:31:04.538162 kernel: audit: type=1101 audit(1769452264.516:875): pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.539946 kernel: audit: type=1103 audit(1769452264.519:876): pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.519000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.568952 kernel: audit: type=1006 audit(1769452264.519:877): pid=5465 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 26 18:31:04.569071 kernel: audit: type=1300 audit(1769452264.519:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd01ea0690 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:04.519000 audit[5465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd01ea0690 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:04.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:04.606966 kernel: audit: type=1327 audit(1769452264.519:877): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:04.608478 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 26 18:31:04.616000 audit[5465]: USER_START pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.643922 kernel: audit: type=1105 audit(1769452264.616:878): pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.622000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.665985 kernel: audit: type=1103 audit(1769452264.622:879): pid=5469 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.833685 sshd[5469]: Connection closed by 10.0.0.1 port 51742 Jan 26 18:31:04.835096 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:04.838000 audit[5465]: USER_END pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:31:04.844239 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:51742.service: Deactivated successfully. Jan 26 18:31:04.848707 systemd[1]: session-26.scope: Deactivated successfully. Jan 26 18:31:04.854080 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit. Jan 26 18:31:04.857527 systemd-logind[1580]: Removed session 26. Jan 26 18:31:04.838000 audit[5465]: CRED_DISP pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.889684 kernel: audit: type=1106 audit(1769452264.838:880): pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.889983 kernel: audit: type=1104 audit(1769452264.838:881): pid=5465 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:04.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.106:22-10.0.0.1:51742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:06.449932 kubelet[2838]: E0126 18:31:06.449434 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:31:09.448974 kubelet[2838]: E0126 18:31:09.448596 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:31:09.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.106:22-10.0.0.1:51744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:09.858204 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:51744.service - OpenSSH per-connection server daemon (10.0.0.1:51744). Jan 26 18:31:09.864822 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:09.864868 kernel: audit: type=1130 audit(1769452269.857:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.106:22-10.0.0.1:51744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:09.972000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:09.976550 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 51744 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:09.986941 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:09.997008 kernel: audit: type=1101 audit(1769452269.972:884): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:09.997060 kernel: audit: type=1103 audit(1769452269.980:885): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:09.980000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:09.999969 systemd-logind[1580]: New session 27 of user core. 
Jan 26 18:31:10.031145 kernel: audit: type=1006 audit(1769452269.980:886): pid=5482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 26 18:31:10.031227 kernel: audit: type=1300 audit(1769452269.980:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffda05c7d0 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:09.980000 audit[5482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffda05c7d0 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:10.066928 kernel: audit: type=1327 audit(1769452269.980:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:09.980000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:10.067996 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 26 18:31:10.078000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.083000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.145621 kernel: audit: type=1105 audit(1769452270.078:887): pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.146026 kernel: audit: type=1103 audit(1769452270.083:888): pid=5486 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.325118 sshd[5486]: Connection closed by 10.0.0.1 port 51744 Jan 26 18:31:10.325941 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:10.328000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.338480 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:51744.service: Deactivated successfully. 
Jan 26 18:31:10.342536 systemd[1]: session-27.scope: Deactivated successfully. Jan 26 18:31:10.355676 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit. Jan 26 18:31:10.357592 systemd-logind[1580]: Removed session 27. Jan 26 18:31:10.328000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.391096 kernel: audit: type=1106 audit(1769452270.328:889): pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.391187 kernel: audit: type=1104 audit(1769452270.328:890): pid=5482 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:10.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.106:22-10.0.0.1:51744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:10.513934 kubelet[2838]: E0126 18:31:10.513699 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:31:10.517032 kubelet[2838]: E0126 18:31:10.516991 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:31:12.454689 kubelet[2838]: E0126 18:31:12.454611 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:31:15.357580 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:32862.service - OpenSSH per-connection server daemon (10.0.0.1:32862). Jan 26 18:31:15.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:32862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:15.367016 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:15.367097 kernel: audit: type=1130 audit(1769452275.357:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:32862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:15.448048 kubelet[2838]: E0126 18:31:15.446079 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:31:15.474000 audit[5499]: USER_ACCT pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.477201 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 32862 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:15.479178 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:15.489504 systemd-logind[1580]: New session 28 of user core. 
Jan 26 18:31:15.476000 audit[5499]: CRED_ACQ pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.505843 kernel: audit: type=1101 audit(1769452275.474:893): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.505880 kernel: audit: type=1103 audit(1769452275.476:894): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.551950 kernel: audit: type=1006 audit(1769452275.476:895): pid=5499 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 26 18:31:15.476000 audit[5499]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a4f6550 a2=3 a3=0 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:15.553248 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 26 18:31:15.582980 kernel: audit: type=1300 audit(1769452275.476:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a4f6550 a2=3 a3=0 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:15.594298 kernel: audit: type=1327 audit(1769452275.476:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:15.476000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:15.626062 kernel: audit: type=1105 audit(1769452275.563:896): pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.563000 audit[5499]: USER_START pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.649484 kernel: audit: type=1103 audit(1769452275.574:897): pid=5503 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.574000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.744547 sshd[5503]: Connection closed by 10.0.0.1 port 32862 
Jan 26 18:31:15.745228 sshd-session[5499]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:15.746000 audit[5499]: USER_END pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.751001 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:32862.service: Deactivated successfully. Jan 26 18:31:15.754455 systemd[1]: session-28.scope: Deactivated successfully. Jan 26 18:31:15.757510 systemd-logind[1580]: Session 28 logged out. Waiting for processes to exit. Jan 26 18:31:15.760610 systemd-logind[1580]: Removed session 28. Jan 26 18:31:15.746000 audit[5499]: CRED_DISP pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.793948 kernel: audit: type=1106 audit(1769452275.746:898): pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.794144 kernel: audit: type=1104 audit(1769452275.746:899): pid=5499 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:15.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:32862 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 26 18:31:18.440497 kubelet[2838]: E0126 18:31:18.440295 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:31:20.761544 systemd[1]: Started sshd@27-10.0.0.106:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872). Jan 26 18:31:20.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:32872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:20.767697 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:20.767921 kernel: audit: type=1130 audit(1769452280.760:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:32872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:20.899889 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:20.898000 audit[5544]: USER_ACCT pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:20.903263 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:20.920243 systemd-logind[1580]: New session 29 of user core. Jan 26 18:31:20.899000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:20.955078 kernel: audit: type=1101 audit(1769452280.898:902): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:20.955208 kernel: audit: type=1103 audit(1769452280.899:903): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:20.955254 kernel: audit: type=1006 audit(1769452280.899:904): pid=5544 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 26 18:31:20.968701 kernel: audit: type=1300 audit(1769452280.899:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff6aabe00 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:20.899000 audit[5544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff6aabe00 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:20.994461 kernel: audit: type=1327 audit(1769452280.899:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:20.899000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:20.995699 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 26 18:31:21.003000 audit[5544]: USER_START pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.041023 kernel: audit: type=1105 audit(1769452281.003:905): pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.003000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.061872 kernel: audit: type=1103 audit(1769452281.003:906): pid=5548 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.201122 sshd[5548]: Connection closed by 10.0.0.1 port 32872 Jan 26 18:31:21.202144 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:21.208000 audit[5544]: USER_END pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.208000 audit[5544]: CRED_DISP pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.248663 systemd[1]: sshd@27-10.0.0.106:22-10.0.0.1:32872.service: Deactivated successfully. Jan 26 18:31:21.255163 systemd[1]: session-29.scope: Deactivated successfully. Jan 26 18:31:21.257437 systemd-logind[1580]: Session 29 logged out. Waiting for processes to exit. 
Jan 26 18:31:21.265060 kernel: audit: type=1106 audit(1769452281.208:907): pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.265139 kernel: audit: type=1104 audit(1769452281.208:908): pid=5544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.265220 systemd[1]: Started sshd@28-10.0.0.106:22-10.0.0.1:32884.service - OpenSSH per-connection server daemon (10.0.0.1:32884). Jan 26 18:31:21.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:32872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:21.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.106:22-10.0.0.1:32884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:21.269015 systemd-logind[1580]: Removed session 29. 
Jan 26 18:31:21.371000 audit[5562]: USER_ACCT pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.372529 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 32884 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:21.378167 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:21.375000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.375000 audit[5562]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd259b8440 a2=3 a3=0 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:21.375000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:21.394217 systemd-logind[1580]: New session 30 of user core. Jan 26 18:31:21.404102 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 26 18:31:21.413000 audit[5562]: USER_START pid=5562 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.419000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.948061 sshd[5566]: Connection closed by 10.0.0.1 port 32884 Jan 26 18:31:21.950933 sshd-session[5562]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:21.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.106:22-10.0.0.1:32894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:21.962438 systemd[1]: Started sshd@29-10.0.0.106:22-10.0.0.1:32894.service - OpenSSH per-connection server daemon (10.0.0.1:32894). Jan 26 18:31:21.974000 audit[5562]: USER_END pid=5562 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.975000 audit[5562]: CRED_DISP pid=5562 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:21.981616 systemd-logind[1580]: Session 30 logged out. Waiting for processes to exit. 
Jan 26 18:31:21.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.106:22-10.0.0.1:32884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:21.982453 systemd[1]: sshd@28-10.0.0.106:22-10.0.0.1:32884.service: Deactivated successfully. Jan 26 18:31:21.988047 systemd[1]: session-30.scope: Deactivated successfully. Jan 26 18:31:22.001028 systemd-logind[1580]: Removed session 30. Jan 26 18:31:22.147000 audit[5577]: USER_ACCT pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:22.148894 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:22.149000 audit[5577]: CRED_ACQ pid=5577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:22.149000 audit[5577]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefb6b3a00 a2=3 a3=0 items=0 ppid=1 pid=5577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:22.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:22.151991 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:22.163290 systemd-logind[1580]: New session 31 of user core. Jan 26 18:31:22.173261 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 26 18:31:22.179000 audit[5577]: USER_START pid=5577 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:22.186000 audit[5584]: CRED_ACQ pid=5584 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:22.438274 kubelet[2838]: E0126 18:31:22.438150 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:31:22.438274 kubelet[2838]: E0126 18:31:22.438238 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:31:22.445137 kubelet[2838]: E0126 18:31:22.445081 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:31:23.281847 sshd[5584]: Connection closed by 10.0.0.1 port 32894 Jan 26 18:31:23.284057 sshd-session[5577]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:23.286000 audit[5577]: USER_END pid=5577 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.286000 audit[5577]: CRED_DISP pid=5577 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.296586 systemd[1]: sshd@29-10.0.0.106:22-10.0.0.1:32894.service: Deactivated successfully. Jan 26 18:31:23.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.106:22-10.0.0.1:32894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:23.307238 systemd[1]: session-31.scope: Deactivated successfully. Jan 26 18:31:23.319129 systemd-logind[1580]: Session 31 logged out. Waiting for processes to exit. Jan 26 18:31:23.319000 audit[5600]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:23.319000 audit[5600]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe2cfb8220 a2=0 a3=7ffe2cfb820c items=0 ppid=3000 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:23.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:23.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.106:22-10.0.0.1:44752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:23.326564 systemd[1]: Started sshd@30-10.0.0.106:22-10.0.0.1:44752.service - OpenSSH per-connection server daemon (10.0.0.1:44752). Jan 26 18:31:23.331057 systemd-logind[1580]: Removed session 31. 
Jan 26 18:31:23.335000 audit[5600]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:23.335000 audit[5600]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe2cfb8220 a2=0 a3=0 items=0 ppid=3000 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:23.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:23.417000 audit[5608]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:23.417000 audit[5608]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe3e5dbb30 a2=0 a3=7ffe3e5dbb1c items=0 ppid=3000 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:23.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:23.424000 audit[5608]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:23.424000 audit[5608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3e5dbb30 a2=0 a3=0 items=0 ppid=3000 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:23.424000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:23.461000 audit[5604]: USER_ACCT pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.462596 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 44752 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:23.462000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.463000 audit[5604]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9c35b160 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:23.463000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:23.465522 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:23.476560 systemd-logind[1580]: New session 32 of user core. Jan 26 18:31:23.491125 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 26 18:31:23.498000 audit[5604]: USER_START pid=5604 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.503000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.916660 sshd[5610]: Connection closed by 10.0.0.1 port 44752 Jan 26 18:31:23.917944 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:23.923000 audit[5604]: USER_END pid=5604 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.923000 audit[5604]: CRED_DISP pid=5604 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:23.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.106:22-10.0.0.1:44752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:23.934264 systemd[1]: sshd@30-10.0.0.106:22-10.0.0.1:44752.service: Deactivated successfully. Jan 26 18:31:23.939958 systemd[1]: session-32.scope: Deactivated successfully. Jan 26 18:31:23.949285 systemd-logind[1580]: Session 32 logged out. Waiting for processes to exit. 
Jan 26 18:31:23.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.106:22-10.0.0.1:44758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:23.954045 systemd[1]: Started sshd@31-10.0.0.106:22-10.0.0.1:44758.service - OpenSSH per-connection server daemon (10.0.0.1:44758). Jan 26 18:31:23.961107 systemd-logind[1580]: Removed session 32. Jan 26 18:31:24.074000 audit[5622]: USER_ACCT pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.076056 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 44758 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:24.078000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.078000 audit[5622]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe88d1e220 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:24.078000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:24.080868 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:24.093879 systemd-logind[1580]: New session 33 of user core. Jan 26 18:31:24.105121 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 26 18:31:24.118000 audit[5622]: USER_START pid=5622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.122000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.357065 sshd[5626]: Connection closed by 10.0.0.1 port 44758 Jan 26 18:31:24.357959 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:24.361000 audit[5622]: USER_END pid=5622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.363000 audit[5622]: CRED_DISP pid=5622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:24.368735 systemd[1]: sshd@31-10.0.0.106:22-10.0.0.1:44758.service: Deactivated successfully. Jan 26 18:31:24.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.106:22-10.0.0.1:44758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:24.374888 systemd[1]: session-33.scope: Deactivated successfully. Jan 26 18:31:24.377213 systemd-logind[1580]: Session 33 logged out. Waiting for processes to exit. 
Jan 26 18:31:24.381929 systemd-logind[1580]: Removed session 33. Jan 26 18:31:27.443416 kubelet[2838]: E0126 18:31:27.442463 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:31:29.373469 systemd[1]: Started sshd@32-10.0.0.106:22-10.0.0.1:44766.service - OpenSSH per-connection server daemon (10.0.0.1:44766). Jan 26 18:31:29.389026 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 26 18:31:29.389162 kernel: audit: type=1130 audit(1769452289.373:950): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.106:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:29.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.106:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:29.441595 kubelet[2838]: E0126 18:31:29.441513 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:31:29.448983 kubelet[2838]: E0126 18:31:29.446133 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:29.523000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.528725 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:29.530436 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 44766 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:29.545659 systemd-logind[1580]: New session 34 of user core. 
Jan 26 18:31:29.556087 kernel: audit: type=1101 audit(1769452289.523:951): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.525000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.587921 kernel: audit: type=1103 audit(1769452289.525:952): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.589909 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 26 18:31:29.525000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5f0e830 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:29.646005 kernel: audit: type=1006 audit(1769452289.525:953): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 26 18:31:29.646124 kernel: audit: type=1300 audit(1769452289.525:953): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5f0e830 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:29.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:29.661891 kernel: audit: type=1327 audit(1769452289.525:953): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:29.608000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.706043 kernel: audit: type=1105 audit(1769452289.608:954): pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.616000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.733985 kernel: audit: type=1103 audit(1769452289.616:955): pid=5646 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.824284 sshd[5646]: Connection closed by 10.0.0.1 port 44766 Jan 26 18:31:29.824515 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:29.827000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.834438 systemd[1]: sshd@32-10.0.0.106:22-10.0.0.1:44766.service: Deactivated successfully. Jan 26 18:31:29.842716 systemd[1]: session-34.scope: Deactivated successfully. Jan 26 18:31:29.850526 systemd-logind[1580]: Session 34 logged out. Waiting for processes to exit. Jan 26 18:31:29.852503 systemd-logind[1580]: Removed session 34. 
Jan 26 18:31:29.863970 kernel: audit: type=1106 audit(1769452289.827:956): pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.828000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:29.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.106:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:29.889979 kernel: audit: type=1104 audit(1769452289.828:957): pid=5640 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:31.448222 kubelet[2838]: E0126 18:31:31.446939 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:31:32.437952 kubelet[2838]: E0126 18:31:32.437575 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:34.441279 kubelet[2838]: E0126 18:31:34.440730 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:34.858094 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:34.858217 kernel: audit: type=1130 audit(1769452294.845:959): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.106:22-10.0.0.1:40084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:34.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.106:22-10.0.0.1:40084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:34.846595 systemd[1]: Started sshd@33-10.0.0.106:22-10.0.0.1:40084.service - OpenSSH per-connection server daemon (10.0.0.1:40084). Jan 26 18:31:34.973000 audit[5669]: USER_ACCT pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:34.975250 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 40084 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:34.977706 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:34.990108 systemd-logind[1580]: New session 35 of user core. 
Jan 26 18:31:34.975000 audit[5669]: CRED_ACQ pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.039922 kernel: audit: type=1101 audit(1769452294.973:960): pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.040012 kernel: audit: type=1103 audit(1769452294.975:961): pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:34.975000 audit[5669]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef3663d0 a2=3 a3=0 items=0 ppid=1 pid=5669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:35.093981 kernel: audit: type=1006 audit(1769452294.975:962): pid=5669 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 26 18:31:35.094039 kernel: audit: type=1300 audit(1769452294.975:962): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef3663d0 a2=3 a3=0 items=0 ppid=1 pid=5669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:35.094066 kernel: audit: type=1327 audit(1769452294.975:962): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:34.975000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:35.095942 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 26 18:31:35.106113 kernel: audit: type=1105 audit(1769452295.101:963): pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.101000 audit[5669]: USER_START pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.106000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.168195 kernel: audit: type=1103 audit(1769452295.106:964): pid=5673 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.333362 sshd[5673]: Connection closed by 10.0.0.1 port 40084 Jan 26 18:31:35.336255 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:35.337000 audit[5669]: USER_END pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:31:35.341906 systemd[1]: sshd@33-10.0.0.106:22-10.0.0.1:40084.service: Deactivated successfully. Jan 26 18:31:35.345633 systemd[1]: session-35.scope: Deactivated successfully. Jan 26 18:31:35.352078 systemd-logind[1580]: Session 35 logged out. Waiting for processes to exit. Jan 26 18:31:35.353392 systemd-logind[1580]: Removed session 35. Jan 26 18:31:35.338000 audit[5669]: CRED_DISP pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.409406 kernel: audit: type=1106 audit(1769452295.337:965): pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.409584 kernel: audit: type=1104 audit(1769452295.338:966): pid=5669 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:35.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.106:22-10.0.0.1:40084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:35.440257 kubelet[2838]: E0126 18:31:35.439044 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:31:37.444898 containerd[1602]: time="2026-01-26T18:31:37.444422951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:31:37.445362 kubelet[2838]: E0126 18:31:37.445013 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b" Jan 26 18:31:37.538094 containerd[1602]: time="2026-01-26T18:31:37.537720720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:31:37.540010 containerd[1602]: time="2026-01-26T18:31:37.539898465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:31:37.540010 containerd[1602]: time="2026-01-26T18:31:37.539981451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=0" Jan 26 18:31:37.540628 kubelet[2838]: E0126 18:31:37.540335 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:31:37.540971 kubelet[2838]: E0126 18:31:37.540715 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:31:37.541155 kubelet[2838]: E0126 18:31:37.541039 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc422a360d1a4f58bab37439bea74799,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,Window
sOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:31:37.544219 containerd[1602]: time="2026-01-26T18:31:37.544150813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:31:37.606383 containerd[1602]: time="2026-01-26T18:31:37.606225033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:31:37.609266 containerd[1602]: time="2026-01-26T18:31:37.609156491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:31:37.609346 containerd[1602]: time="2026-01-26T18:31:37.609291843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:31:37.611819 kubelet[2838]: E0126 18:31:37.610939 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:31:37.611819 kubelet[2838]: E0126 18:31:37.611149 2838 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:31:37.611819 kubelet[2838]: E0126 18:31:37.611288 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmnl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fals
e,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64656d67bd-dn9xb_calico-system(17a5ef19-3fa0-4382-add3-2ce06b88ea33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:31:37.614158 kubelet[2838]: E0126 18:31:37.614121 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:31:38.147000 audit[5687]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:38.147000 audit[5687]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc21a476b0 a2=0 a3=7ffc21a4769c items=0 ppid=3000 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:38.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:38.165000 
audit[5687]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:31:38.165000 audit[5687]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc21a476b0 a2=0 a3=7ffc21a4769c items=0 ppid=3000 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:38.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:31:40.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.106:22-10.0.0.1:40090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:40.362392 systemd[1]: Started sshd@34-10.0.0.106:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). Jan 26 18:31:40.374106 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 26 18:31:40.374307 kernel: audit: type=1130 audit(1769452300.361:970): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.106:22-10.0.0.1:40090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:40.580000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.621002 kernel: audit: type=1101 audit(1769452300.580:971): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.621118 kernel: audit: type=1103 audit(1769452300.582:972): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.582000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.586204 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:40.628429 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:40.582000 audit[5689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcec02510 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:40.687442 kernel: audit: type=1006 audit(1769452300.582:973): pid=5689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=36 res=1 Jan 26 18:31:40.688451 kernel: audit: type=1300 audit(1769452300.582:973): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcec02510 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:40.688603 kernel: audit: type=1327 audit(1769452300.582:973): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:40.582000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:40.706395 systemd-logind[1580]: New session 36 of user core. Jan 26 18:31:40.757360 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 26 18:31:40.766000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.772000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.829891 kernel: audit: type=1105 audit(1769452300.766:974): pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:40.830012 kernel: audit: type=1103 audit(1769452300.772:975): pid=5693 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:41.043057 sshd[5693]: Connection closed by 10.0.0.1 port 40090 Jan 26 18:31:41.045220 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:41.047000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:41.060951 systemd[1]: sshd@34-10.0.0.106:22-10.0.0.1:40090.service: Deactivated successfully. Jan 26 18:31:41.069282 systemd[1]: session-36.scope: Deactivated successfully. Jan 26 18:31:41.047000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:41.083159 systemd-logind[1580]: Session 36 logged out. Waiting for processes to exit. Jan 26 18:31:41.085166 systemd-logind[1580]: Removed session 36. 
Jan 26 18:31:41.110189 kernel: audit: type=1106 audit(1769452301.047:976): pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:41.110284 kernel: audit: type=1104 audit(1769452301.047:977): pid=5689 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:41.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.106:22-10.0.0.1:40090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:42.441879 kubelet[2838]: E0126 18:31:42.441404 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gzr9m" podUID="e99188ce-3ac3-4524-8689-b68793ad3ef1" Jan 26 18:31:44.439206 kubelet[2838]: E0126 18:31:44.439142 2838 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:44.453149 containerd[1602]: time="2026-01-26T18:31:44.452439980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:31:44.532322 containerd[1602]: time="2026-01-26T18:31:44.532230238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:31:44.535146 containerd[1602]: time="2026-01-26T18:31:44.535025703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:31:44.535224 containerd[1602]: time="2026-01-26T18:31:44.535173338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:31:44.536703 kubelet[2838]: E0126 18:31:44.536127 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:31:44.536703 kubelet[2838]: E0126 18:31:44.536178 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:31:44.536703 kubelet[2838]: E0126 18:31:44.536349 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zw8p_calico-system(fb70354b-2e8e-4b1e-823d-0f04eedecec2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:31:44.538241 kubelet[2838]: E0126 18:31:44.538043 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zw8p" podUID="fb70354b-2e8e-4b1e-823d-0f04eedecec2" Jan 26 18:31:46.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.106:22-10.0.0.1:49382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:46.061605 systemd[1]: Started sshd@35-10.0.0.106:22-10.0.0.1:49382.service - OpenSSH per-connection server daemon (10.0.0.1:49382). Jan 26 18:31:46.070920 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:31:46.070998 kernel: audit: type=1130 audit(1769452306.061:979): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.106:22-10.0.0.1:49382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:31:46.234000 audit[5706]: USER_ACCT pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.236121 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 49382 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4 Jan 26 18:31:46.242376 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:31:46.255289 systemd-logind[1580]: New session 37 of user core. 
Jan 26 18:31:46.238000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.306269 kernel: audit: type=1101 audit(1769452306.234:980): pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.306403 kernel: audit: type=1103 audit(1769452306.238:981): pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.306453 kernel: audit: type=1006 audit(1769452306.238:982): pid=5706 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 26 18:31:46.329113 kernel: audit: type=1300 audit(1769452306.238:982): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd672f4a30 a2=3 a3=0 items=0 ppid=1 pid=5706 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:46.238000 audit[5706]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd672f4a30 a2=3 a3=0 items=0 ppid=1 pid=5706 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:31:46.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:46.381364 kernel: audit: type=1327 audit(1769452306.238:982): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:31:46.383291 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 26 18:31:46.392000 audit[5706]: USER_START pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.442002 kernel: audit: type=1105 audit(1769452306.392:983): pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.442327 kernel: audit: type=1103 audit(1769452306.393:984): pid=5710 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.393000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.451926 kubelet[2838]: E0126 18:31:46.448716 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:46.452897 containerd[1602]: time="2026-01-26T18:31:46.452453867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:31:46.566920 containerd[1602]: time="2026-01-26T18:31:46.566667800Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Jan 26 18:31:46.574416 containerd[1602]: time="2026-01-26T18:31:46.573344414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:31:46.574416 containerd[1602]: time="2026-01-26T18:31:46.573457444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:31:46.577270 kubelet[2838]: E0126 18:31:46.576954 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:31:46.577270 kubelet[2838]: E0126 18:31:46.577015 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:31:46.577270 kubelet[2838]: E0126 18:31:46.577181 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56d588489d-lsq6l_calico-system(5f250e57-76e7-4282-9d3b-aa7149c84f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:31:46.581872 kubelet[2838]: E0126 18:31:46.578369 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56d588489d-lsq6l" podUID="5f250e57-76e7-4282-9d3b-aa7149c84f3a" Jan 26 18:31:46.701198 sshd[5710]: Connection closed by 10.0.0.1 port 49382 Jan 26 18:31:46.702264 sshd-session[5706]: pam_unix(sshd:session): session closed for user core Jan 26 18:31:46.705000 audit[5706]: USER_END pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.715178 systemd[1]: sshd@35-10.0.0.106:22-10.0.0.1:49382.service: Deactivated successfully. Jan 26 18:31:46.716002 systemd-logind[1580]: Session 37 logged out. Waiting for processes to exit. Jan 26 18:31:46.725074 systemd[1]: session-37.scope: Deactivated successfully. Jan 26 18:31:46.731859 systemd-logind[1580]: Removed session 37. Jan 26 18:31:46.755602 kernel: audit: type=1106 audit(1769452306.705:985): pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.755692 kernel: audit: type=1104 audit(1769452306.705:986): pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.705000 audit[5706]: CRED_DISP pid=5706 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:31:46.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.106:22-10.0.0.1:49382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:31:49.466931 kubelet[2838]: E0126 18:31:49.461237 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64656d67bd-dn9xb" podUID="17a5ef19-3fa0-4382-add3-2ce06b88ea33" Jan 26 18:31:50.440969 kubelet[2838]: E0126 18:31:50.440497 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-dbcn4" podUID="bcefd4f3-4cd3-4d24-b71b-627a7a3ce855" Jan 26 18:31:50.446686 kubelet[2838]: E0126 18:31:50.446073 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:31:51.456245 containerd[1602]: time="2026-01-26T18:31:51.456195436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:31:51.537101 containerd[1602]: 
time="2026-01-26T18:31:51.536297480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:31:51.538971 containerd[1602]: time="2026-01-26T18:31:51.538942033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:31:51.539104 containerd[1602]: time="2026-01-26T18:31:51.539090129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:31:51.539471 kubelet[2838]: E0126 18:31:51.539436 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:31:51.540726 kubelet[2838]: E0126 18:31:51.540133 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:31:51.540726 kubelet[2838]: E0126 18:31:51.540277 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6pl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8ccd88f4-6fjn2_calico-apiserver(7718a3ef-224e-406c-b2ab-a63644f74c0b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 26 18:31:51.542071 kubelet[2838]: E0126 18:31:51.542048 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8ccd88f4-6fjn2" podUID="7718a3ef-224e-406c-b2ab-a63644f74c0b"
Jan 26 18:31:51.732030 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 26 18:31:51.732130 kernel: audit: type=1130 audit(1769452311.720:988): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.106:22-10.0.0.1:49390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:31:51.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.106:22-10.0.0.1:49390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:31:51.721048 systemd[1]: Started sshd@36-10.0.0.106:22-10.0.0.1:49390.service - OpenSSH per-connection server daemon (10.0.0.1:49390).
Jan 26 18:31:51.915000 audit[5751]: USER_ACCT pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:51.918672 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 49390 ssh2: RSA SHA256:zJcBDzJPqa/thi/sJxLw7uNiQAUVGK/FdGe7PALlYj4
Jan 26 18:31:51.921294 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:31:51.943110 systemd-logind[1580]: New session 38 of user core.
Jan 26 18:31:51.950183 kernel: audit: type=1101 audit(1769452311.915:989): pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:51.918000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:51.955001 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 26 18:31:51.980237 kernel: audit: type=1103 audit(1769452311.918:990): pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:51.980304 kernel: audit: type=1006 audit(1769452311.918:991): pid=5751 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1
Jan 26 18:31:51.918000 audit[5751]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc24d10710 a2=3 a3=0 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:31:52.038932 kernel: audit: type=1300 audit(1769452311.918:991): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc24d10710 a2=3 a3=0 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:31:51.918000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:31:52.054059 kernel: audit: type=1327 audit(1769452311.918:991): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:31:51.965000 audit[5751]: USER_START pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.105937 kernel: audit: type=1105 audit(1769452311.965:992): pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:51.971000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.140095 kernel: audit: type=1103 audit(1769452311.971:993): pid=5755 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.275159 sshd[5755]: Connection closed by 10.0.0.1 port 49390
Jan 26 18:31:52.278029 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Jan 26 18:31:52.282000 audit[5751]: USER_END pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.306374 systemd[1]: sshd@36-10.0.0.106:22-10.0.0.1:49390.service: Deactivated successfully.
Jan 26 18:31:52.312045 systemd[1]: session-38.scope: Deactivated successfully.
Jan 26 18:31:52.332172 kernel: audit: type=1106 audit(1769452312.282:994): pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.332265 kernel: audit: type=1104 audit(1769452312.282:995): pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.282000 audit[5751]: CRED_DISP pid=5751 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:31:52.333387 systemd-logind[1580]: Session 38 logged out. Waiting for processes to exit.
Jan 26 18:31:52.336068 systemd-logind[1580]: Removed session 38.
Jan 26 18:31:52.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.106:22-10.0.0.1:49390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
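Aside: the `proctitle=` field in the audit PROCTITLE records above is hex-encoded. A minimal Python sketch for decoding it (using the value taken verbatim from the log):

```python
# Audit PROCTITLE records hex-encode the process command line; in longer
# titles, NUL bytes separate argv elements, so we map them to spaces.
proctitle_hex = "737368642D73657373696F6E3A20636F7265205B707269765D"

decoded = bytes.fromhex(proctitle_hex).replace(b"\x00", b" ").decode("utf-8", errors="replace")
print(decoded)  # -> sshd-session: core [priv]
```

This confirms the audit events belong to the privileged `sshd-session` process handling user `core`'s login (session 38).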